Friday, November 17, 2017

3 Reasons that 0.999...≠1

1. If 0.999...=1, then ...999.0=-1

2. If 0.999...=1, then 2=1

3. You can construct a set representing a number that is infinitely close to 1 but still less than 1. I'm not sure what the speaker means by saying 0.999...=1 holds in some math systems and not in others; it's not as if 2=1 holds in some math systems and not in others.

Disclaimer: I had nothing to do with the making of any of the videos I cited here; they were created by mathematicians on YouTube, and I don't own them.

Friday, November 10, 2017

Privacy is REAL. Privacy MATTERS. Privacy is NOT dead.

So, recently a friend of mine posted something along the lines of privacy not existing anymore, and that therefore if he posts a little bit of personal info about somebody else (including photographs of them), he's not doing any harm because the info is already out there. The person in question was depressed, and my friend was trying to spread awareness of depression by sharing this bit of personal info, so he figured he was doing a net good.

Now, there are two problems with this assertion that he made.

1. He could probably have found photographs online of depressed people who had volunteered their info, rather than using info from someone who hadn't consented. If someone is too far gone in their depression to rationally consent, then one should get their consent in advance while they are not so depressed if possible.

2. We do still have privacy, and it is still valuable and worth protecting and respecting. No one is watching us pee, and just because computer algorithms can predict most general things about us doesn't mean they're super duper precise all the time. I'm reminded of this every time I go on Facebook and see all the posts that I have zero interest in, or when I'm browsing the web in general and the vast majority of ads are for things I don't care about at all.
If the algorithms were really that good at predicting people personally, advertisers wouldn't need to cast such a wide net with their target audiences.

All the basic information about you is out there, and all of your online activity is too. But someone would probably have to have a LOT of patience and time on their hands and some hacking skills to follow your online trail and uncover all of your information. And that's only going to happen if someone has a motive to dig up dirt on you personally in the first place.

I'm guessing different companies don't always share your info with each other (they are competing with each other, after all).

Also, just because the algorithms can track you doesn't mean there's a human being who's looking through the databases and making a note of it every time you personally put something on the web. A lot of that tracking stuff is probably automated, so the big brother who's supposedly watching you isn't even paying any attention to you, because he can have his computer do it for him. Less work for him, and the computer won't judge you like he would.
Potential employers might be an exception to this perhaps to some extent, but even then they have limited time and they're only going to do a background check on you if you decide to apply for a job with them. A background check, NOT an investigation.

Also, just because information is technically available publicly doesn't necessarily make it "public information". You go to a grocery store and check out. Everyone behind you in line can see what you're buying, and so can the cashier. That doesn't mean you want the whole world to know what you're buying, and if everyone in line behind you plus the cashier started gossiping about you based on your purchases, that would be weird and kind of creepy.

Privacy is real. Privacy matters. Privacy is not dead.

So the next time you start thinking that privacy isn't a thing anymore, try searching the web for pornography of yourself, or videos of you using the toilet. You probably won't find any.

Tuesday, November 7, 2017

Boolean Value-Beliefs vs. Probabilistic Beliefs and Their Relationship to the Accountability of Scientists and Fact-Checkers

On the Pro-Truth Pledge's website, it says:

Misinformation is anything that goes against reality.  It can mean directly lying, lying by omission, or misrepresenting the truth to suit one’s own purposes. Sometimes misinformation is blatant and sometimes it’s harder to tell. For those tough calls we rely on credible fact-checking sites and the scientific consensus.

Now, I've noticed a certain question that seems like it probably comes up a lot from potential signers of the Pro-Truth Pledge. They ask, "How do you fact-check a fact-checking site?"

And the response, "By asking other fact-checkers to fact-check them."

Then they ask, "If a fact-checker checks another fact-checker, and finds a statement that the first fact-checker said to be false, how do you know whether the second fact-checker is right about that or if the first fact-checker was right after all?"

And the response: "check the scientific consensus."

And they ask, "Can you fact-check the scientific consensus?"

And the response: "Scientists can, but regular people don't have enough expertise and so they can't."

And then they ask, "then how do you know if the scientific consensus is correct?"

And the response, "because science has been pretty reliable in general compared to everything else so far, so scientific consensus is really unlikely to be wrong."

And then they ask, "But how do you know if scientific consensus is correct in this particular case?"

And then they also ask, "And how do we know that scientists never collectively decide to lie about their findings? Like, if the physicist community had conspired to hide some dangerous knowledge so that no one finds out the secret of how to make nuclear weapons in 1945, how would we know they were lying?"

Although I suppose in that case it could be argued that scientists basically never collectively decide to lie about their findings unless there's a REALLY GOOD REASON. They're really smart, their findings are how they get paid, and going through all that investigative work without getting to brag about finally solving the mystery, and without advancing science further for all mankind, really really sucks.

And they might also ask "And what if the scientific consensus is just plain wrong about something, simply because their current measuring tools aren't strong enough to get the right answer on a particular research question, and no one will realize it until maybe a century later when we have better tools?"
How would we know?

And the response? The Pro-Truth Movement doesn't seem to have made that response yet. And I know a lot of people are going to want one. Just saying "trust the scientific consensus" isn't enough. It's probably going to get people good, reliable results most of the time, but many laypeople won't know that in advance of signing the Pledge.

So I, a bona fide Pro-Truth Pledge signer and advocate, will now take it upon myself to provide that response. Bear in mind, the response is a bit complicated and depends on a decent amount of background knowledge of advanced critical thinking methods, in particular how you can apply probability theory to your own thought processes. I'm going to walk you through it and make this as simple and easy to understand as I can, so please bear with me.

Let's begin, shall we?

Let's start with the way a typical layperson who signs the pledge is likely to think. Here is something that someone I know on Facebook, who had signed the PTP, posted:

"We can overcome some biases but not all because we, each of us, is always on the inside of our own selves looking out. We must come to agree as to rules to describe objective phenomena because we have no guarantee that our subjective perceptions of those phenomena are identical or that any of us is empirically correct in our perception. In fact, the odds are that neither they nor we are factually correct. Analogy: We both agree that the sky is blue, but the way your optical system processes color may not be the same as the way mine does? (Some of us can see more colors, some less and to a greater or lesser degree.) And to ice the cake, when we get down to the empirical reality -- the science of it -- we find that that sky is actually colorless and what we're perceiving as color (i.e., a pigmented object reflecting certain optical wavelengths) is actually a result of the prismatic effects of the atmosphere -- refraction vice reflection. We're fooled due to our own subjective limitations -- limitations that we can't be objectively aware of but that we can assume we have."

Let's break this down. Here is what seems to be the core claim of the above quote:
"We must come to agree as to rules to describe objective phenomena because we have no guarantee that our subjective perceptions of those phenomena are identical or that any of us is empirically correct in our perception."

The above argument has 3 component claims:
-we have no guarantee that our perceptions are consistent with each other
-we have no guarantee that our perceptions are correct (that they match reality).
-Therefore, we must agree on outside rules to evaluate whether specific beliefs accurately describe objective phenomena, rather than relying on our own perceptions.

These statements sound reasonable at first. Someone who describes all of their beliefs in terms of pure Boolean values (0 or 1, yes or no etc), would not be able to notice the flaw in the above argument.

But why shouldn't we use Boolean values to describe our beliefs? With Boolean values it's a simple 0 or 1, yes or no, true or false. And in reality, a claim is ALWAYS either true OR false.

Yes, but how much do you know about reality? Maybe the claim that "Sailor Vulcan loves cereal" in reality must necessarily be 100% true OR 100% false. But you don't know which it is.

Now suppose you got some finite amount of evidence, some clue as to what the answer to that question "Does Sailor Vulcan love cereal" might be. For instance, you discover that:
"Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal."

You should DEFINITELY update on this evidence! Clearly this evidence favors the hypothesis that Sailor Vulcan doesn't love cereal.

But wait a moment. Even after seeing this evidence, you still don't really know whether Sailor Vulcan loves cereal or not. Sure, that hypothesis seems less likely now, but you still don't really know.

How much evidence would it take to really know?

Well you could ask me if I love cereal. And I might tell you "yes" or "no". But how would you know if I was telling the truth? Maybe I don't know if I love cereal. Maybe I haven't tried it before. Hey, it's possible, isn't it? So then you could ask me if I've eaten cereal before. And I can say "yes" or "no". 

How would you know if I was lying?

How do you know that cereal exists?

No really, how do you know?

Because you eat it for breakfast every day?

How do you know you eat it for breakfast every day?

Because you remember eating it?

How do you know you're remembering that correctly? Is there an experimental test or some other outside indicator that you can rely on to be 100% sure that your cereal-eating memories weren't confabulated/hallucinated?

Now, you might be tempted to give up at this point and say "Well that could never happen."

But how do you know that for certain? Haven't you been wrong before? A typical layperson isn't bothered by this question, but they don't really know why they aren't bothered by it. They will just reiterate that those hypothetical scenarios are ridiculous nonsense and obviously false, which is circular reasoning. I ask them, "How do you know that claim X is false?" and they come up with a counterexample in the literature or their own experience that shows claim X to be false. And then I ask "How do you know that the counterexample you just gave isn't false?" and they say "Because claim X is obviously false, therefore you can't falsify the examples that falsify claim X".

How do you know that for certain? 

I'm not bothered by this question any more than the typical layperson. However, I know (or at least I'm reasonably certain of) why I'm not bothered by it. The reason?

Because the chances of claim X being true are EXTREMELY low. I don't need to round off my estimate of those chances to the nearest whole number.
Yes that's right, it's really that simple.

In the case of whether Sailor Vulcan loves cereal, you could make an estimate in your head of how likely you think it is that Sailor Vulcan loves cereal. You would base this estimate on your previous baseline estimate modified by the evidence you have gotten after your baseline estimate was established. Maybe your baseline estimate of the probability that someone loves cereal is "Most people love cereal, and therefore in the absence of any more specific information about this individual's food preferences, I estimate a high probability (if you're really smart you'll give it a number, let's say 70% for now) that Sailor Vulcan loves cereal."

Then, you acquire the evidence:
"Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal."
Let's say that the probability that this evidence is genuine (and not me lying or being mistaken about my food preferences, or you misremembering something I said or did in your presence, etc.) is really high, maybe around 80%.

Since your prior probability that Sailor Vulcan loves cereal is 70%, that means that if you looked at 100 people who were sufficiently similar to me, you would expect the number of them who love cereal to be ~70 people, and the number of them who don't love cereal to be ~30 people. The prior probability of Sailor Vulcan being in the group of 70 people is 70%, and the prior probability of Sailor Vulcan being in the group of 30 people is 30%.

Now here's where things get really interesting. You can use the evidence: "Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal," to update your original mental probability estimate. To do this, multiply your original estimate by how likely that evidence would be if the hypothesis were true, multiply the remaining probability by how likely the evidence would be if the hypothesis were false, and then rescale the two results so they add up to 100%. (This is Bayes' rule.)

For instance, weigh your estimate of the likelihood that "Sailor Vulcan loves cereal" against the likelihood, in a world where he does love cereal, of finding out that
"Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal."
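For readers who do want to see the arithmetic, here is a minimal sketch of this update as Bayes' rule in Python. The 70% prior comes from the example above; the two likelihood numbers (20% and 80%) are made-up illustrations of "this evidence would be surprising if he loved cereal and unsurprising if he didn't", not measurements:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: multiply each hypothesis by how likely the evidence
    would be under it, then rescale so the two results sum to 1."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# 70% prior that Sailor Vulcan loves cereal, then the healthy-lifestyle evidence:
posterior = bayes_update(0.70, p_evidence_if_true=0.2, p_evidence_if_false=0.8)
print(round(posterior, 3))  # → 0.368: the estimate drops from 70% to about 37%
```

Notice that the evidence didn't flip the belief to a flat "false"; it just shifted the estimate downward, which is exactly the kind of incremental updating this post is about.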

Don't look at the numbers. You can do this without consciously doing any math!!

Imagine the hypothetical you that estimated a 70% baseline likelihood that Sailor Vulcan loves cereal. Put yourself in their shoes, try to imagine what that version of you is thinking and feeling, what their psychological state is.

Imagine your psychological state when you believe that Sailor Vulcan is highly likely to love cereal.

Now imagine precisely how surprised or unsurprised you are when you find out that
"Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal."

Now imagine your psychological state after this. How likely do you feel it is that Sailor Vulcan loves cereal? Do the chances feel higher? Do they feel lower?

Remember this feeling. Memorize it. Pay attention to it. This feeling of being surprised or unsurprised by the evidence tells you what your mental likelihood-estimates were before encountering evidence that changes them. And if you know in advance how surprised or unsurprised you would be to find out that a certain claim is true or false, then that level of hypothetical surprise or unsurprise IS your current mental likelihood-estimate that said claim is true or false. You don't need to round this estimate off to the nearest whole number. This mental estimate IS your currently held belief.

Now, you're never going to be able to get infinite certainty in this new kind of belief. 0% and 100% are not actually probabilities but absolute certainties, aka "estimates of infinite certainty", so they can't be updated by the evidence: anything multiplied by 0% is 0%, and anything multiplied by 100% is still itself (100% equals 1 and 0% equals 0; percentages just express the numbers from 0 to 1 as fractions with a denominator of 100).
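This "0% and 100% can't be updated" point is easy to verify numerically. In the sketch below (the same made-up Bayes'-rule setup as the cereal example, with illustrative numbers), a belief pinned at 0% or 100% stays frozen no matter how much strong evidence comes in:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    # Bayes' rule, as before: prior times likelihood, rescaled to sum to 1.
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

for start in (0.0, 1.0):
    p = start
    for _ in range(10):
        # ten rounds of evidence that is 99-to-1 against the hypothesis
        p = bayes_update(p, p_evidence_if_true=0.01, p_evidence_if_false=0.99)
    print(start, "->", p)  # → 0.0 -> 0.0 and 1.0 -> 1.0
```

Any prior strictly between 0 and 1, by contrast, would be dragged sharply toward 0 by that same evidence.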

Do you see why it was often so hard to change your mind before? When you round up all your beliefs to the nearest whole number and treat them like they are all 100% likely to be true or 0% likely to be true, it becomes really really hard to change your mind. It's also worth noting that "50% likely to be true" isn't quite the same as "I have no idea whatsoever". With a 50% mental likelihood-estimate that a claim is true, you would not be surprised if the claim was true, nor if it was false. But if you really had no opinion, if you really had no idea and no way to make a guess besides eenie-meanie-minie-moe or flipping a coin, then it wouldn't be at 50%. You simply wouldn't have any idea how surprised or unsurprised you would be at all.
Since you don't have an infinite amount of evidence to draw from, there will always be some level of uncertainty about whether your beliefs are true or false, and about how closely your perspective matches reality.
Just because different people's brains are sometimes unreliable in slightly different ways doesn't mean that you should always ignore your brain and only listen to experts. Experts have limited data sets too, and they aren't with you 24/7 to make all your decisions and judgements for you in your everyday life. Scientific consensus is definitely more reliable than individual judgements, but science is used more for generalized and reproducible knowledge, and there are types of knowledge that aren't scientific, like the fact that my keys are in my pocket, or the fact that I am currently writing a post on my blog.
If you wanted to rely on scientific consensus to figure out where your keys were, you'd need to give the scientific community quite a bit of data about yourself and your personal organizational habits. Then you'd need a large representative sample of Everett branches containing a "you" who has misplaced their keys, so that researchers could run a behavioral study on them, predict where your keys are most likely to be right now, have each "you" look in each location, and record the number of yous who found their keys in each particular spot.
Or at the very least (since this is real life), you'd need a large sample of people who are sufficiently similar to you that they habitually leave their keys in the same spots you do when they lose them.

You don't need a 100% certain "guarantee" that your perceptions are accurate to act on them or to use them as models to help make further predictions. Having a finite amount of data/observations means you can have a finite amount of likelihood/expectation that your hypotheses are true or false.

And this brings us full circle back to the Pro-Truth Pledge, and to our original question: how can we know if and when scientists or fact-checkers collectively lie or are collectively wrong?

To be honest, it is my opinion that the Pro-Truth Pledge oversimplifies rational epistemology by framing rational beliefs in terms of Boolean values. Potential states of reality can be Boolean, but a rational belief in the absence of infinite evidence will always be merely probabilistic.
However, I think it does this for a very good reason: most people don't have the time or desire to learn any probability theory, and yet we still need a way for regular people to distinguish between:
1. claims that are significantly higher than 50% likely to be true based on the evidence available
2. claims that are significantly lower than 50% likely to be true based on the evidence available.

Most people already use Boolean values to describe their beliefs about a thing, rather than using a percentage that measures how surprised or unsurprised they would be by certain hypothetical observations they could make about that thing.
And it would be a lot harder to start a movement educating everybody about probability BEFORE starting a movement getting people to value truth more.

Rest assured, using Boolean values can still be good enough a lot of the time. Generally, I would say that Boolean value-beliefs are probably good enough for most purposes when you're dealing with subjects that have larger data sets, because doing more tests tends to increase the disparity between
1. the probability of a claim being true given the evidence available
2. the probability of that same claim being false given the evidence available.
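One way to sketch why more tests widen that gap: if each independent test favors the true hypothesis by some likelihood ratio, the posterior odds are the prior odds times that ratio raised to the number of tests. The numbers below (an even 50/50 prior, each test favoring the claim 2-to-1) are purely illustrative:

```python
def posterior_after_n_tests(prior, likelihood_ratio, n):
    """Posterior odds = prior odds * likelihood_ratio**n,
    converted back to a probability at the end."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** n
    return odds / (1 + odds)

for n in (1, 5, 20):
    # the probability climbs toward (but never reaches) 1 as tests accumulate
    print(n, round(posterior_after_n_tests(0.5, 2.0, n), 6))
```

After one such test the claim sits near two-thirds; after twenty it is overwhelmingly likely, which is roughly why Boolean value-beliefs work tolerably well for mature fields with big data sets and badly for young ones.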

Unfortunately, using Boolean value-beliefs makes it harder to independently evaluate the reliability or honesty of particular fact-checkers and scientists/scientific organizations. If you're using Boolean value-beliefs, then when it comes to science it's really hard to change your mind with evidence, because you won't be able to form an opinion until the scientists have already gathered enough evidence to make a really high or really low likelihood estimate of whether a claim is true or false. Checking for yourself would mean gathering the evidence that had been used to form your scientific consensus-given baseline estimate in the first place.
If you're using Boolean value-beliefs, any probability estimate too close to 50% is simply labeled "I don't know". So no matter how much evidence you gather, you're still basically stuck there until you've gathered enough evidence to be overwhelmingly on one side or the other. And even then it's hard to update, because you've already run so many tests, and they all resulted in "I don't know". If all the tests you ran felt inconclusive, it's going to be hard to make that jump to "this is really likely/unlikely to be true". But if you can't update your beliefs incrementally with the evidence, then you can't properly check a scientist's work. You would only be able to evaluate whether new results matched with old well-established results, and anything really surprising would be thrown out the window simply because it is really surprising.

So laypeople who don't have any knowledge of probability will just have to take our word for it and hope that fact-checkers and scientists are trustworthy.
Which is a bit of a bummer, because if they were thinking about their beliefs in terms of percentages of expectation, as levels of surprise/unsurprise, rather than just a yes/no, they would be able to update incrementally. And that would allow them to form opinions based directly on the evidence that scientists publish in journals, rather than just taking their word for it.

And the thing is, while scientific consensus is a LOT MORE reliable than individual perception, it still isn't infallible, and in the infrequent and unlikely event that the scientific consensus gets something wrong, a layperson with no knowledge of probability would be unable to notice it. This could have consequences for new or emerging fields of study that are highly relevant to people's lives but don't have very large data sets yet. If there's any public reporting being done on a relatively new field without many (or perhaps any) established, well-supported theories, then a layperson who uses Boolean value-beliefs instead of probabilistic beliefs is going to be unable to form any rational opinions about it. And if it's a field that they really need to know about for their own safety or wellbeing or the safety or wellbeing of society (such as nutrition or Artificial Intelligence Safety), then not being able to form an accurate understanding of the current state of the field is not a good thing.

However, every cause has to start somewhere, and since currently most people aren't interested in learning about probability, we'll just have to stick with the Pro-Truth Pledge as it is for now.

Friday, October 27, 2017

Eight Literally Zero-Effort Halloween Costumes

Don't have a costume this year and don't have the time or energy to make one? Is buying a premade costume a tad pricey? Doesn't matter, I'll give you some of mine. Here you go:

1. Lord Voldemort using polyjuice potion to disguise himself as a muggle
2. A boggart
3. A secret agent disguised as you
4. A sailor scout using the transformation pen
5. A clone of yourself
6. Your long-lost identical twin
7. Your time-traveling descendant from the future
8. A superhero's secret identity

Thursday, October 26, 2017

"Every single username under the Sun is already taken."

 **BowsiesCassie** is pestering JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248

**BowsiesCassie**: You need to stop doing that.

JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248: Stop doing what?

**BowsiesCassie**: You need to stop relying on God to solve all your problems.

JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248: Why shouldn't I? It's better at solving them than I am.

**BowsiesCassie**: That's why you need to stop. You are gaining false social credit through your over-reliance on God. You should just be yourself.

JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248: Myself is a nobody. There are trillions and trillions of people in the Universe. Anything I can do, someone else has already done, or is already doing it, or is going to do it before I can. Face it, I'm irrelevant.

**BowsiesCassie**: You're not irrelevant to me.

JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248: But I will be. The more people you meet, the more friends you'll have who can do everything I can do but better.

**BowsiesCassie**: Really? Because I think these other future friends you mention are probably out of my league right now. As the saying goes, it's bad enough to compare yourself to Isaac Newton...

JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248: I know that saying.

**BowsiesCassie**: "Every single username under the Sun is already taken."

JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248: That sounds rather apt. Who are you quoting?

**BowsiesCassie**: Natalie Tran. She was a famous comedian of Ancient Earth. Seriously, do you not pay attention to anything they told you in elementary school?

JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248: I didn't go to elementary school. It's not like attendance is mandatory.

**BowsiesCassie**: Ugh. I give up. Come find me when you're ready to grow up and start living life with the rest of us instead of whining about not being the best at everything.


JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248: Cassie...

**BowsiesCassie** has stopped pestering JohnBrown1-23841-23481-23481-230481-23481-204812-3481-248

Tuesday, October 17, 2017

A Tale of Four Moralities

This is a children's story I wrote today, about four children with four different moral philosophies. There are some subjects discussed in this story that some might consider mature, but I made sure that there was no cussing, no violence, no sex, etc., and did my best to handle the subject matter in a manner appropriate to children's ears. Please feel free to tell me what you think of it. Thanks!


Eye-for-an-Eye Ivan was very angry.
His teddy was stolen.

Ivan decided.
He would catch the thief and steal from them.

"This will pay them back," said Ivan. "Serves them right."

Golden-Rule Goldie was very happy.
It was her birthday.
Her papa gave her a teddy.

Goldie decided.
She would give a gift to her papa in return.

"It was nice of him to give me a teddy," said Goldie.
"This is the least I can do."

The next day, her teddy was gone.

Minimize-Suffering Minnie was very sad. Someone was stealing teddies from her friends.
She looked at her teddy.
Would she be next?

Minnie decided. She would find the stolen teddies.
And she would return them.

"It's the right thing to do," said Minnie.
"This way, no one will be missing their teddies. Not anymore."

The next day, her teddy was gone.

Maximize-Flourishing Maxie felt guilty, but hopeful.
Earlier, his mama told him something sad.

"The other neighborhood is poor.
Kids there don't have teddies."

So Maxie decided.
He would steal teddies from his friends. He would give them to the other neighborhood.

"It's the best thing I can do," said Maxie. "My friends can afford new teddies. But the poor kids can't."

So Maxie stole teddies from his friends,
and gave them to the other neighborhood.

This made the kids there happy.
But his friends were sad, because now THEY had no teddies.

The next day,
the sad kids went with their parents to the teddy store,
to buy them new teddies.
But the store was all sold out of teddies.

"It's been hard to sell teddies in this town," said the store clerk. "Many poor people can't afford them. And many rich people already have teddies."

"Why not give teddies to the poor?
For free?" asked Maxie.

"We tried that before," said the clerk.
"It didn't work.
A long line of people came for teddies.
Many poor people can't afford cars.
When they got here, they were last in line. Then they got to the front of the line.
But by then, we were out of teddies."

"Then why give teddies to the rich?" asked Minnie.
"Can't you tell them no?"

"Other rich people paid us to give teddies for free.
They can't do that all the time.

"We have to sell to the rich, too.
Otherwise, we can't afford to make teddies.

"At all."

"Why not?" asked Goldie.

"We have to pay for the stuff to make the teddy," said the clerk.

"Why can't you just get that stuff for free?" asked Maxie.
"Then you could give teddies, without being paid."

"Maxie," said Maxie's mama. "There aren't enough teddies for everyone.
There isn't enough stuff to make that many."

Maxie began to cry.
"I wanted to make more people happier," he said.
"I thought by giving teddies to poor kids, I could make more of the town happier. There are more kids in the poor neighborhood.
And they had no teddies."

"YOU stole our teddies!" Ivan accused. "You should be punished.
Someone should steal a teddy from you."

"I'm sorry!" said Maxie.
"I don't have any teddies.
I gave them to the kids in the other neighborhood."

"Maybe if you asked nicely, they would return our teddies?" asked Goldie.

"No," said Minnie.
"They would feel the same way we did, when the teddies were stolen from us.
They don't know the teddies were stolen.
If we tell them, they won't know we're telling the truth."

No one was sure what to do.

Finally, Maxie said,
"We need to find a way to make more stuff.
That way, there will be enough to make teddies for everyone."

"And if we can't do that?" asked Minnie.

"I don't know," said Maxie.
"But we have to try!"

"Why should we help everyone?
The poor kids have never helped us," said Goldie.

"What else can we do?" asked Minnie. "We can't steal the teddies back."

"The poor kids didn't do anything wrong!" said Ivan. "We shouldn't punish them!"

"Maybe if we find a way to make more stuff," said Maxie.
"the poor kids will have enough to give you something, in return."

"Okay," said Goldie. "I'll help."

The kids talked.

The parents looked at each other.

"Do you think they can do it?" asked Goldie's papa.

Ivan's mama laughed.
She thought it was a joke.

Minnie's papa sighed sadly.

And Maxie's mama turned to the kids and said,
"If you're kind and just,
understanding and giving.
If you listen to each other, and to others.
If you work hard and do your best.
If you learn, grow and become stronger.
If you are brave, and never give up.
Then, maybe, you will find a way."

They would find a way to make more stuff. Someday.

And so they began their search.