Has anyone dug into GEMS, the vote-counting system that uses decimals instead of whole numbers? Any programmer out here will know there is only one reason to use decimals when counting votes: to cheat! Link from 2016 in comments.
(media.patriots.win)
🛑 STOP THE STEAL 🛑
posted ago by spaz
+194 / -0
No, there's no reason they ever should have used decimals to count votes.
Decimals, also called "floating point" numbers in programming, are a common source of accumulated rounding error and other errors, in particular because of how the usual representation (IEEE 754) works.
IEEE 754 is like scientific notation, with a significand (sometimes called the mantissa, the 1.xxxx number), an exponent, and a sign (+/-). Since it's a binary format, only fractions that are sums of powers of 1/2 (1/2, 1/4, 1/8, 1/16, ...) can be represented exactly. For example, you can exactly represent 0.59375 as (1/2 + 1/16 + 1/32). Some common values, like 0.1, are infinitely repeating fractions in binary and cannot be represented exactly, so a small rounding error accumulates with each calculation. (Binary "fixed point," a different and seldom-used representation, suffers from the same problem.) Also, the gap between consecutive representable values grows as the exponent grows. Both of these problems affect all widths of IEEE 754, both "floats" (the 32-bit format) and "doubles" (the 64-bit format). This is why any code dealing with money written by any competent person represents monetary values with integers, usually as separate numerator/denominator values, or in units of cents or fractions of a cent, depending on the accuracy the system needs.
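A minimal Python sketch of the accumulation problem described above (this is just standard IEEE 754 double behavior, not code from any voting system):

```python
# 0.1 has no exact binary representation, so repeated addition drifts.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)         # slightly off from 1.0
print(total == 1.0)  # False

# The integer approach (counting in cents, or in whole votes) is exact.
cents = 0
for _ in range(10):
    cents += 10      # 10 cents each, stored as an integer
print(cents == 100)  # True
```

Ten additions of a tenth should give exactly 1, and with integers they do; with doubles they don't.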
Integers in computers come in many sizes; the usual "default" is still 32 bits, which ranges from -2,147,483,648 to 2,147,483,647 when signed (0 to 4,294,967,295 when unsigned). I could see a programmer arguing that's too small depending on population growth, but that's still not a reason to use doubles. 64-bit integers range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 (signed) or 0 to 18,446,744,073,709,551,615 (unsigned). Any reasonable database and network protocol supports 64-bit integers without a problem, so there's still no real reason to use doubles.
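The ranges quoted above follow directly from the bit widths, and a quick sanity check shows a signed 64-bit integer dwarfs any plausible vote total:

```python
# Integer ranges computed from bit widths (two's complement for signed).
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1
UINT32_MAX = 2**32 - 1
INT64_MAX = 2**63 - 1
UINT64_MAX = 2**64 - 1

print(f"{INT32_MAX:,}")   # 2,147,483,647
print(f"{UINT64_MAX:,}")  # 18,446,744,073,709,551,615

# Even if every person on Earth voted, a signed 64-bit counter could
# tally more than a billion such elections. (8 billion is a rough
# world-population figure used only for illustration.)
world_population = 8_000_000_000
assert INT64_MAX // world_population > 1_000_000_000
```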
We need an open source voting software that we the people can review and improve. We must have actual transparency to restore trust in our elections.
I concur with the assessment. I made another post here. Basically, when you design the database and the data format for the various fields, there is no reason to use floats for things that are definitely whole numbers and can never be fractional. In fact, using floats instead of integers is less efficient, both in processing and in storage.
So either a very inexperienced person designed this data format, or it was done specifically to mess around with the data.
BTW, I will say another thing. If the idea is to legitimately implement "weighting" for whatever reason (perhaps they want to use the same system for board elections, where weighting becomes important), you still wouldn't represent votes as fractions. Rather, you'd store the votes as integers and assign a separate floating-point weight field per candidate.
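A sketch of the design just described: raw ballot counts stay as integers, and any weighting lives in a separate field applied only at report time. All names here are illustrative, not taken from any real voting system:

```python
# Raw counts are always whole numbers; weights are a separate,
# explicit field. Hypothetical data for illustration only.
raw_votes = {"A": 120, "B": 95}   # exact integer tallies
weight = {"A": 1.0, "B": 1.0}     # 1.0 = ordinary one-person-one-vote

# Weighting is applied as a distinct, auditable step at report time.
weighted = {c: raw_votes[c] * weight[c] for c in raw_votes}
print(weighted)

# The stored record remains exact integers, so the audit trail is clean.
assert all(isinstance(v, int) for v in raw_votes.values())
```

This keeps the exact counts verifiable forever; the weighting step is visible and separate instead of being baked into fractional vote totals.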
So why would they actually use floating point for votes? It's not because of weighting; it's because they want to obfuscate the fact that they are taking votes away from one candidate. When you look at the log, there is no easy way to notice instances where votes are being taken away. You'd have to run a script or something to detect it. I'd say that's the real reason for floating point.
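The "run a script" check mentioned above could look something like this. The record format here is entirely hypothetical; a real audit would parse whatever export format the actual system produces:

```python
# Scan tally records for non-integer vote counts. A whole-number
# count should always satisfy count == int(count); anything else
# is a red flag worth investigating.
records = [
    ("precinct-1", "candidate-A", 512.0),
    ("precinct-1", "candidate-B", 338.64),  # fractional -> suspicious
]

def fractional(records):
    """Return every record whose count is not a whole number."""
    return [r for r in records if r[2] != int(r[2])]

for rec in fractional(records):
    print("non-integer count:", rec)
```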
This goes beyond weighting.
C'mon man, it's much easier to type decimal(15,6) than int; no one ever uses int. Votes are fluid, like gender. --corn pop
No shit. This is why the fraud was so obvious. The 3:5 split wasn't enough to let Biden win.
Well, cheating and fractional compound interest.