Off-topic

These either have nothing to do with the universe, or are outdated and no longer canon.

Modulo
Modulo
Modulo (x mod y) loops around. It's like a clock. A clock is x mod 12. 13 mod 12 is 1, because 13 clockwise hour ticks results in ending up at hour 1.
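In most programming languages modulo is just the % operator. A tiny Python sketch of the clock idea:

```python
# Modulo wraps around like a clock face.
print(13 % 12)  # 1: thirteen hour-ticks on a 12-hour clock land on hour 1
print(25 % 12)  # 1: two full laps plus one tick, same place
```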

Euclidean division, or division with remainder and quotient
Normal division gives a rational number. Euclidean division gives 2 integers: a quotient and a remainder. The dividend is the top number, the divisor is the bottom one: dividend/divisor.

It fits a items into boxes of size n. The q is how many boxes are fully filled. The r is how many items are left over.


 * a/n = q remainder r (so a = q*n + r)


 * a = items (top number)
 * n = box size (bottom number)
 * q = fully filled boxes (floor(a/n)): floor removes the thing after the decimal point, 3.8 becomes 3.


 * r = left-over items (a mod n): mod spins a clock. If 1242 hours pass on a 365-hour clock, the hand is at 1242 mod 365 = hour 147.
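Python's built-in divmod returns exactly this q, r pair. A quick sketch with made-up numbers (17 items, boxes of 5):

```python
# Euclidean division: fit a items into boxes of size n.
a, n = 17, 5         # hypothetical values: 17 items, boxes of 5
q, r = divmod(a, n)  # q = floor(a/n) full boxes, r = a mod n leftovers
print(q, r)          # 3 2: three full boxes, two items left over
assert a == q * n + r  # the defining identity of Euclidean division
```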

Reducing fractions between 0 and 1
A rational number is composed of two integers - a numerator and denominator.

Certain fractions are identical to each other. 1/2, 2/4, and 3/6 are all the same rational number, with different fractions.

The simplest fraction is one where the numerator and denominator are coprime - their greatest common divisor is 1. 12 has the divisors 1,2,3,4,6,12. 7 has the divisors 1,7. The greatest shared divisor is 1. 7/12 is simplified as much as possible.
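Reducing a fraction is just dividing both parts by their greatest common divisor; a sketch using Python's math.gcd:

```python
from math import gcd

def reduce(numerator, denominator):
    # Divide both parts by their greatest common divisor.
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(reduce(3, 6))   # (1, 2): 3/6 is the same rational number as 1/2
print(reduce(7, 12))  # (7, 12): gcd is 1, already fully simplified
```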

The prevalence of decimal, and pro-decimal bias
The reason decimal is used so widely

Float
A float is a datatype, like an int. An int can store any whole number, like 0, 1, etc. in the range of +/-2 billion.

The difference is that a float can store fractions, such as 0.5 or 124.342394. It can store super small fractions like 0.000000000001 or large numbers such as 99999999999999999999999999999999999. As a huge bonus, it takes up the same amount of memory as an int. Uh oh. There's something wrong with it, isn't there? Yes, and it's much worse than you thought. In fact, it is the worst datatype.

History (boring)
Computers are binary, because that's the simplest thing to work with. Don't assume computer designers are smart, that's a huge mistake. Because then you'll start to trust them.

Anyways, it's super easy to make whole numbers in binary: 0, 1, 10, 11, 100. But how to make fractions? Simply add a dot, where the bits after it are worth 1/2, 1/4, and so on. 2.75 becomes 10.11.

These are both easy to implement in processors. Imagine an 8 bit processor. Every message it sends or receives has 8 on/off switches. It divides the RAM (temporary switches) and hard drive (permanent switches) into numbered 8-bit regions: #0 00000000, #1 00000000, and so on. So 16 bits of memory is just two of these regions.

If you want to calculate a number easily, it can only have 256 values - 00000000 through 11111111, which is 0 through 255. If the first bit is treated as a minus sign, we can have the numbers -128 through 127.
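A quick Python sketch of the same 8 bits read both ways:

```python
# The same 8 bits, read as unsigned vs signed (two's complement).
bits = 0b11111111
print(bits)  # 255 when read as a plain binary number

signed = int.from_bytes(bytes([bits]), byteorder="big", signed=True)
print(signed)  # -1 when the first bit is treated as a sign
```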

There were floats at this time, but every different unix manufacturer or whatever did them in their own way. This led to the IEEE 754 standardization, or more respectfully "the standardization which must not be named".

Mathematicians demanded quick computation of hundreds of numbers per second, and the numbers had to be somewhat precise, and also very large and very small. The programmers noticed that scientists would often write very large and very small numbers with "scientific notation", which is just being lazy and not writing out all of the zeros. 6.02214076×10^23, for example. It's actually 602214076000000000000000.

How it works
Both int and float are 32 bits. They are the exact same thing, nobody will tell you this. The difference is how they are interpreted.

Here is how an int works:

Sign [0] Number [0000000000000000000000000000000]

The first bit is the negative symbol, so 0...0001 is 1. (Strictly, ints use two's complement, where -1 comes out as all ones, 1...1111, but the sign-bit idea is the same.) If you don't like this, that's understandable, and you are free to use unsigned int, which is a pure binary number.

Here is how a float "works":

Sign [0] Exponent [0000 0000] Mantissa [0000 0000 0000 0000 0000 000]

As you can see, it's totally butchered: an odd 23 bits on the last part and an odd 9 bits across the first two. This means a hex version of the data cannot be easily interpreted, because the fields don't line up with the 4-bit hex digits - the sign and the top of the exponent share one hex digit, and the bottom bit of the exponent leaks into the mantissa's digits.

The float is calculated like this:

final real number: (+/-) mantissa * 2^exponent.

However, it's still not that simple.

The exponent works almost like a sbyte, or int8, or signed 8 bit integer. But it's stored with a bias: you subtract 127 from the stored value to get the actual exponent.

So 10000011 (which is 131) means 131 - 127 = 4, i.e. 2^4.

Then comes the mantissa, which is read from left to right. The first bit is worth 1/2, the second 1/4, and so on. All 23 fractions are added up, plus an invisible leading 1, so the mantissa really encodes a number between 1 and 2. As the exponent gets larger, the point slides right, and the mantissa bits stop being fractions and start standing for powers of 2.
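A sketch that pulls the three fields out of a real float, using Python's struct module to reinterpret the 32 bits as an integer:

```python
import struct

def float_bits(x):
    # Reinterpret the float's 32 bits as an unsigned int, then slice the fields.
    n = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = n >> 31
    exponent = (n >> 23) & 0xFF   # stored with a bias of 127
    mantissa = n & 0x7FFFFF       # 23 bits; normal numbers add an implicit leading 1
    return sign, exponent - 127, mantissa

print(float_bits(0.125))  # (0, -3, 0): +1.0 * 2^-3
print(float_bits(-6.0))   # (1, 2, 4194304): -1.5 * 2^2
```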

Extra weirdness
A float has 2 zeroes, 0 and -0. It has two infinities, inf and -inf. A number becomes infinity if it gets too large, and of course, it doesn't matter what you subtract from it. How could that be helpful? I don't know either.

Here is the worst part. A float has nearly 2^24 unused values, all known as NaN, not a number. That is about 16,777,216, the same amount of RGB colors. Every possible color can be stored in a float, with no precision loss. Nobody will tell you this, either. You can even represent the "turned off screen" color with -0. If that is even a thing.
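For fun, a sketch of stuffing a 24-bit color into the sign bit plus 23 mantissa bits of a NaN pattern. The color-to-bits mapping here is made up, and an all-zero payload would collide with infinity, so not quite every color fits:

```python
import struct

EXP_ALL_ONES = 0xFF << 23  # exponent field all 1s -> NaN (when mantissa != 0)

def color_to_nan_bits(r, g, b):
    # Pack 24 bits of color: top bit into the sign, the rest into the mantissa.
    payload = (r << 16) | (g << 8) | b
    sign = payload >> 23
    mantissa = payload & 0x7FFFFF
    return (sign << 31) | EXP_ALL_ONES | mantissa

bits = color_to_nan_bits(255, 0, 128)
value = struct.unpack(">f", struct.pack(">I", bits))[0]
print(value != value)  # True: NaN is the only value not equal to itself
```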

Floats are very rainbowy flashy if you move around in 3d with the 3d float position being interpreted as RGB. The flashiness slows down as you go further out (very far). The full float can even be interpreted as RGBA. If you can do this, you rock.

Conclusion
Floats are always exact. There is a set amount of floats. There is one float for every 32-bit integer. In a way, you can say that floats are a "bijection" of ints.

However, sometimes a program prints a float out really far. So you might see something like "0.00000000234234823894729384787959205520358", which would not be on an "Official List of Floats", but is just the exact decimal expansion of one of the official floats.

Here is the worst part. A float cannot have more than about 7~8 significant digits of precision. So it's completely useless to print the float with so many damn numbers. Yes, I know that a float is a binary fraction, and the exact fraction has more digits. But when a number is stored in a float, you only get 6~8 reliable digits total, no matter where the point sits. The point does float - that's the whole name - and it can slide around 38 places either way, but you still only carry ~7 digits with you. It's a 7-digit number with a movable point, and you might as well use an int, which gives you 9~10 digits.
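You can see the ~7-digit limit by round-tripping numbers through 32 bits with Python's struct (Python's own float is 64-bit, so the pack/unpack forces it down to a real float):

```python
import struct

def to_float32(x):
    # Round-trip through a 32-bit float to see what precision survives.
    return struct.unpack(">f", struct.pack(">f", x))[0]

print(to_float32(0.1))        # 0.10000000149011612: ~8 good digits, then garbage
print(to_float32(123456789))  # 123456792.0: big whole numbers lose digits too
print(to_float32(16777217))   # 16777216.0: the first integer a float can't store
```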

Floats are weird. Google still uses them for calculations, it's terrible. If you want the full number, you have to go to Wolfram Alpha.

Pronoun
It's easy as 1 2 3 4 5.

Examples of the forms

 * 1) He does it, she does it, they do it. He/she is doing it, they are doing it. (Singular they takes plural verb agreement, so it can't be blindly swapped in for normal pronouns, e.g. "(Pronoun1) is crazy" results in "they is crazy".)
 * 2) Give it to him, give it to her, give it to them.
 * 3) I like his thing, I like her thing, I like their thing.
 * 4) That's his, that's hers, that's theirs. (No apostrophe S)
 * 5) He looked at himself, she looked at herself, they looked at Multi: themselves, Single: themself.

With this, you can even create new pronouns, like "ze, zim, zer, zerez". These are called neopronouns; however, nobody takes them seriously.

Types of singular they
There are two types of singular they:


 * Type A, for unnamed people, equivalent to generic he or "he or she": "Everyone has their opinion", "Eche on in þer craft ys wijs" (Each one in their craft is wise) Used since 1382. People often use Type A singular they without noticing.
 * Type B, for named people, equivalent to "he" or "she": "Their name is Xyzzy. They have no gender." Modern, often used intentionally. Not everyone will use it or understand it.

What to do when there is no gender?
Since there is no third person singular gender-neutral pronoun in English, "they" is sometimes used. This can be unintuitive, as "they" is plural - the opposite of singular.

Now that you know how it works, I will refer to dragons and therians as "they". Well, after attempting to do this, it is just confusing. I will probably just use "he".

Pascal's Wager
Heaven or nothing... or hell or nothing. It seems like a good argument. But the existence of other religions disproves it: now it still sucks to be an atheist, but there's no difference between Christian and Muslim - both have an equal chance of hell. We can even everything out with Satanism: now everyone has an equal chance of hell, regardless of religion.

Historical Phonology

 * 1) Aspirated plosives become fricatives

ph -> f, th -> θ, kh -> x