Correct; essentially, in terms of machine learning you still have to discern, as you would for a child, what it should be learning from what it shouldn't be learning.
Think about sending cars around a race track with bends.
All a machine learning algorithm would initially need is some information about how close the car is to the nearest obstacle (e.g. the walls), and the knowledge that it needs to move this object.
Give it a sample of, say, 10,000 cars, and you'll find that on the first pass of the data set maybe 95-100% of those cars never make it past the first bend.
However, you'll probably have a good few cars that had the correct positioning and were the closest to navigating that first bend.
You can therefore tell the system that "This is what a successful attempt looks like".
The system then bases the next 10,000 cars on this information.
The system at this stage doesn't actually need to be entirely aware of what the end outcome is; it has just learnt (or begun to learn) that it should probably be forcing this object to avoid contact with any walls in all directions, whilst not doubling back on its previous path.
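That select-and-respawn loop can be sketched in a few lines of Python. To be clear, this is a toy: the `drive` scoring function and its `target` parameters are stand-ins I've made up for the actual track simulation, and the mutation scheme is just one simple way of "basing the next 10,000 cars" on the survivors.

```python
import random

POPULATION = 10_000   # cars per generation
SURVIVORS = 50        # "this is what a successful attempt looks like"

def drive(params):
    # Hypothetical fitness: pretend a car steered by these parameters
    # gets further round the bend the closer they are to some ideal.
    # A real system would run the track simulation here instead.
    target = [0.3, -0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def random_car():
    # A car is just a small vector of steering parameters.
    return [random.uniform(-1, 1) for _ in range(3)]

def mutate(params, rate=0.1):
    # Nudge a successful car's parameters to produce a new attempt.
    return [p + random.gauss(0, rate) for p in params]

population = [random_car() for _ in range(POPULATION)]
for generation in range(20):
    # Rank every car by how well it did on this pass of the "track"...
    ranked = sorted(population, key=drive, reverse=True)
    best = ranked[:SURVIVORS]
    # ...and base the next 10,000 cars on the successful attempts.
    population = [mutate(random.choice(best)) for _ in range(POPULATION)]

print(f"best score after training: {drive(max(population, key=drive)):.4f}")
```

Note the system never "knows" what a racing line is; it only ever sees a score and copies what scored well, which is exactly the point about it not needing to be aware of the end outcome.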
You keep refining the system initially, and at some point you are essentially capable of defining the end conditions.
The AI will then be capable of determining what should be considered a 'success' (and therefore what it should base its next set of data on) and what is a failure (usually anything else), because if something is true, the opposite must be false.
E.g. if vehicles are not within boundary X1/Y1 to X2/Y2, they must be outside those coordinates. You don't have to tell the system what is both true and false if you've conditioned the data set such that something that is true cannot also be false.
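That true/false split amounts to a single predicate: one rule labels every run, because anything the rule doesn't accept is a failure by definition. The coordinates below are made up purely for illustration.

```python
# A single boundary rule labels every outcome: inside the box is a
# "success", and everything else is, by definition, a failure.
# Coordinates are hypothetical.
X1, Y1, X2, Y2 = 0.0, 0.0, 100.0, 60.0

def is_success(x, y):
    # If a vehicle is not within the boundary it must be outside it,
    # so one predicate gives you both labels for free.
    return X1 <= x <= X2 and Y1 <= y <= Y2

runs = [(50.0, 30.0), (120.0, 10.0), (99.9, 59.9), (-1.0, 30.0)]
labels = [("success" if is_success(x, y) else "failure") for x, y in runs]
print(labels)  # → ['success', 'failure', 'success', 'failure']
```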
The amount of computational power that super complex systems require is dumbfounding, especially because large parts of the technology (though not all of it, of course) are still in their infancy.
That is probably a very long way of essentially saying:
You need to provide:
A starting data set
A set of ideal outcomes (based on the results of the first data set)
and a few parameters.
Such that a system is capable of actually understanding what the hell it is meant to be doing.
AI and algorithms love to be peddled by snake-oil salesmen as some obfuscated system that 'no one really knows'; the reality is that for a system to ever get to that stage, you will always know how it started.
You can never escape the innate bias that comes from the parameters that are set and the determinations made from them.
They did hit on some truths in that interview, not helped by the dims who used a fucking hexadecimal calculator as their 'algorithm' example, which isn't anywhere close to how one would function. Quite apart from the coding implications, there are the value limitations: given that you would need to hold a VAST sum of data, an 8-bit value (two hexadecimal digits) only being capable of storing numbers up to 255 tends to limit your options quite severely.
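For what it's worth, the 255 ceiling is trivial to demonstrate:

```python
# Two hex digits are one byte: 0xFF is the largest value 8 bits can hold.
assert 0xFF == 255

# Wrapping an 8-bit counter shows how quickly that ceiling bites:
count = (255 + 1) % 256
print(count)  # → 0: one past the maximum and you're back at zero
```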
But I won't digress any further on that point.
What I mean to say is that they are correct when they say that algorithms are complex, and it is that very statement which should be used to catch them in their trap. The more complex an algorithm becomes, the more parameters and data it has been fed, and the more precisely its outcomes have been attuned, allowing it to begin to understand 'nuance' in speech, for example.
That lie is perfect because there are still so many people ignorant enough to believe it.