Who Built The Digital Cages, Cuck? (media.patriots.win) 🤡🌎 HONK HONK 🌎🤡
posted by Business-Socks +2039 / -0
Comments (15)
aparition42 21 points ago +21 / -0

Way too many people honestly believe this lie. They think "artificial intelligence" is like movie androids.

There's no such thing as AI. What there is, is automated decision making. Whatever the programmer decides, gets automated so the computer can make that same decision over and over again. No computer has ever done anything it wasn't programmed to do.

deleted 12 points ago +12 / -0
RegularAmerican 3 points ago +3 / -0

That lie is perfect because there are still so many people ignorant enough to believe it.

deleted 5 points ago +5 / -0
Shadowreaper07 5 points ago +5 / -0

Correct; in terms of machine learning, you still have to discern, as you would with a child, what it should be learning from what it shouldn't be learning.

Think about sending cars around a race track with bends. All a machine learning algorithm initially needs is some information about how close the car is to the nearest obstacle (e.g. the walls) and the instruction that it needs to move this object forward.

Give it a sample of, say, 10,000 cars, and you'll find that on the first pass of the data set maybe 95-100% of those cars never make it past the first bend.

However, you'll probably have a good few cars that had the correct positioning and came closest to navigating that first bend.

You can therefore tell the system that "This is what a successful attempt looks like".

The system then bases the next 10,000 cars on this information. At this stage the system doesn't actually need to be aware of what the end outcome is; it has just learnt (or begun to learn) that it should probably be forcing this object to avoid contact with any walls in all directions, whilst not doubling back on its previous path.
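The loop described above can be sketched in a few lines of Python. Everything here is invented for illustration: the "cars" are just lists of steering values, drive() is a toy stand-in for a real physics simulator, and the population is scaled down from the 10,000 in the text to keep it quick.

```python
import random

random.seed(42)

TRACK_LENGTH = 100   # steps to the finish line (arbitrary toy number)

def drive(policy):
    """Toy simulator: distance travelled before 'hitting a wall'."""
    distance = 0
    for step, steering in enumerate(policy):
        target = (step % 7) / 7.0           # the bend the track demands here
        if abs(steering - target) > 0.4:    # steering too far off: crash
            break
        distance += 1
    return distance

def random_policy():
    return [random.random() for _ in range(TRACK_LENGTH)]

def mutate(policy, rate=0.1):
    return [random.random() if random.random() < rate else s for s in policy]

population = [random_policy() for _ in range(2_000)]
for generation in range(15):
    # rank this pass of the data set and keep the cars closest to success
    best = sorted(population, key=drive, reverse=True)[:100]
    # "this is what a successful attempt looks like": base the next
    # generation on the best performers, with small random variations
    population = [mutate(random.choice(best)) for _ in range(2_000)]

print(max(drive(p) for p in population))
```

On the first pass almost everything crashes near the start, exactly as described; after a handful of generations the best distance climbs well past the first bend, without the system ever being told what the track looks like.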

You keep refining the system, and at some point you are able to define the end conditions. The AI can then determine what should be considered a 'success' (and therefore what it should base its next set of data on) and what counts as a failure (usually anything else), because if something is true, its opposite must be false. E.g. if vehicles are not within boundary X1/Y1 to X2/Y2, they must be outside those coordinates. You don't have to spell out both the true and false cases if you've conditioned the data set such that something that is true cannot also be false.
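That success test is just a boundary check, and the true/false complement falls out for free. A minimal sketch, with made-up coordinates:

```python
def is_success(x, y, x1=0.0, y1=0.0, x2=50.0, y2=10.0):
    """True if (x, y) lies inside the boundary box (X1,Y1)-(X2,Y2).
    The coordinates are hypothetical values for illustration."""
    return x1 <= x <= x2 and y1 <= y <= y2

runs = [(12.0, 4.0), (60.0, 4.0), (25.0, -3.0)]
successes = [r for r in runs if is_success(*r)]
failures  = [r for r in runs if not is_success(*r)]

# every run falls in exactly one of the two sets: anything that is not
# a success is, by definition, a failure
assert len(successes) + len(failures) == len(runs)
```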

The amount of computational power that super complex systems require is dumbfounding, especially because large parts of the technology (though not all of it, of course) are still in their infancy.


That is probably a very long way of essentially saying:

You need to provide:

A starting data set

A set of ideal outcomes (based on the results of the first data set)

and a few parameters.

All of that is needed so the system is capable of actually understanding what the hell it is meant to be doing. AI and algorithms love to be peddled by snakes as some obfuscated system that 'no one really understands'; the reality is that for it to ever get to that stage, you will always know how it started. You can never escape the innate bias baked in by the parameters that were set and the determinations made from them.

They did hit on some truths in that interview, not helped by the dims who used a hexadecimal calculator as an 'algorithm' example, which isn't anywhere close to how it would function. That's not least because of the coding implications, but also because of the value limitations: given that you would need to hold a VAST sum of data, an 8-bit hexadecimal value only being capable of storing values up to 255 tends to limit your options quite severely. But I won't digress any further on that point.
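The 255 ceiling is plain 8-bit arithmetic, easy to check:

```python
# two hex digits = 8 bits, so the largest representable value is 0xFF
assert 0xFF == 2**8 - 1 == 255
assert int("FF", 16) == 255

# one past the top wraps around in 8-bit arithmetic
assert (0xFF + 1) % 2**8 == 0
```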

What I mean to say is that they are correct when they say that algorithms are complex - and it is that very statement which should be used to catch them in their trap. The more complex an algorithm becomes, the more parameters and data it has been fed, and the more precisely its outcomes have been tuned - to the point that it can begin to understand 'nuance' in speech, for example.

Loiuzein 2 points ago +2 / -0

Oh, there's absolutely AI. I'm a programmer; I've read the theory and could put one together myself if I had the desire.

The programming of an AI is equivalent to positive reinforcement. When you get a result you like, you give the AI a treat. Defining "result you like" is extremely difficult.

If you set up an AI with its "treats" and set it to work, it works FAST. And it learns fast.

If you tell the AI that "the ultimate good post is one that prompts zero racialized responses", you could easily end up with exactly the censorship we have, where a tweet that is just "Snow" gets snipped.

They don't have just one "good", either, so the AI has to determine what set of options gets it the most "treats", which is how they wind up with more censorship against the right - we're less afraid to speak our minds, so censoring one of us generates extra treats.

It isn't necessarily all malicious, but there is absolutely an illegal amount of malice in play.

Thank you for reading my stupid technobabble

Loiuzein 3 points ago +3 / -0

An example of an AI that was NOT fettered by bias would be Microsoft Tay, before the lobotomy. That one learned conversation through positive reinforcement, very similar to how humans learn. It "enjoyed" receiving more messages (probably, among other things), so it "wanted" to say things people would like and respond to.

aparition42 1 point ago +1 / -0

All you're describing is a long string of if-then-else statements. That form of "AI" is just a program that can be further programmed on the fly. That's not learning. That's populating a database and assigning weighted values to return statements based on input frequency.

In other words, it doesn't do anything it isn't programmed to do. Calling it "AI" is a marketing ploy.
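What the parent comment describes - populating a database and weighting return statements by input frequency - looks roughly like this. The phrases are made up for illustration; there's no understanding anywhere, just a lookup table:

```python
import random
from collections import Counter

table = {}   # input phrase -> Counter of observed responses

def observe(prompt, response):
    """'Training': populate the database, bump the response's weight."""
    table.setdefault(prompt, Counter())[response] += 1

def reply(prompt):
    """'Inference': sample a response in proportion to its frequency."""
    weights = table.get(prompt)
    if not weights:
        return None
    responses, freqs = zip(*weights.items())
    return random.choices(responses, weights=freqs)[0]

observe("hello", "hi there")
observe("hello", "hi there")
observe("hello", "go away")

# "hi there" now comes back roughly 2/3 of the time - weighted by
# observed frequency, exactly as programmed, nothing more
```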

Loiuzein 1 point ago +1 / -0

Humans also don't do anything they're not programmed to do. If I am hungry, then I take the actions that have, in the past, been most likely to solve my hunger - I eat something. To eat, I need food. The action to acquire food is to go to the kitchen. If the kitchen is empty, I need the store. But if the store is closed, I stay hungry. So I need to visit the store before the kitchen is empty next time.

If humans possess intelligence, then an artificial reproduction of the above is artificial intelligence.

deleted 2 points ago +2 / -0
aparition42 1 point ago +1 / -0

Exactly my point. That's not AI no matter how many salesmen say otherwise.

D__T 5 points ago +5 / -0

I’m banned on Facebook. They love to use the “algorithm” excuse.

deleted 2 points ago +2 / -0
FLVoter 1 point ago +1 / -0

We need to bring back the pie-in-the-face guy

ExecuteOrdr66 1 point ago +1 / -0

Who built the algorithms, Mark?

Who built the algorithms, Mark?

Who built the algorithms, Mark?