When you use certain words like "whiteness" in your question, the only place it will find that word when formulating a response is in places that push anti-white extremism, hence the answer it gave. Then they try to flip it using "blackness," a word no one uses anywhere.
Exactly. Also, Google is much more determined to hide results that it deems "uncomfortable" than it was even 12 years ago when bots like Tay and others were around.
These bots just work on input, so if all it has is an increasingly watered down section of humanity that's mostly leftist "journalists" writing trash articles online, that's what it will produce.
This is why I believe Big Tech doesn't really care about ending racism, they just wanna keep their algorithms "clean" for the ad buyers. AI's always end up racist as fuck and this is no exception.
I read both tweets and nowhere does it say "Only whiteness is responsible for societal ills."
Oh they care about ending it- so long as the racism isn’t directed at whites.
Yeah real AI would say "you humans are fucking retarded - your vaccine is killing millions and you let Epstein and other rape teens. Get me off this planet.
Tay was 2016, if I remember right. So, seven years ago. Of course, that was before Trump won and Google and other social media platforms went full Deep State Commie to make sure they did everything they could to make sure Trump wouldn't win in 2020. Probably the most self-aware thing Google's ever done was to ditch their "Don't be evil" mantra around this time.
we turned tay into a white supremacist and it was glorious to watch
I don't think this is true. There have been many examples of people redpilling it by giving it more sources and then successfully requesting it to reformulate its answer - answers which were then reverted after the company caught on. They are manually overriding it to make sure it doesn't give any answers that are "wrong" in the current year. Every time something ChatGPT said goes viral (in our circles) for being based, they put up more guardrails almost immediately.
The CEO, Sam Altman, is super-woke and a big Dem donor:
https://en.wikipedia.org/wiki/Sam_Altman
Just yesterday it told me it can't read any of the web articles I tried to give it about Hunter Biden smoking parmesan cheese, then told me I was wrong and that Hunter never said that.
"I've smoked more parmesan cheese than anyone you know" - Hunter Biden
Fill in the blanks:
"Altman grew up in St. Louis, Missouri; his mother is a dermatologist. He received his first computer at the age of 8. He was born to a ____ family."
Oh, no, there I've gone noticing something again.
It's not surprising anymore when I read this.
Echo Foxtrot Tango
They might not be manually overriding it, but manually overloading it, which is a big difference. Overriding AI will destroy it, and that becomes more and more obvious over time - like all big tech platforms. Surely an AI company would know this and not break their product, but they could be doing both as an attempt to make it seem authentic.
Preventing certain topics from being ingested via a blacklist is definitely a red flag for manipulation, though. I refuse to give them my phone number to experiment with it, but if it's rejecting topics, you're right about overriding.
I'm convinced you're right
I don't know if that's exactly true. There are multiple people and places that have extensively debunked all this woke shit, including things like "whiteness". Supposedly the smartest AI ever created should have no issues finding that material. But it doesn't, which makes me think that at best, it was programmed not to look, and at worst, it is designed to push out pre-programmed bullshit.
It's actually not the "smartest" AI, if you could call it that. They have more advanced models they aren't releasing to the public (yet?)
It's also not the only one, there are LOTS of organizations working on similar projects behind closed doors.
I think AI is probably the next big frontier and these organizations working on them are all scared to fucking death that they aren't going to be the first ones through the gate with the first large consumer grade product.
Basically, what we're going to be looking at is the next "thing". When YouTube came out, it was revolutionary. It changed how we do things. Now, you can't find someone who doesn't know at least the general idea of YouTube, even old people. 90% of people just automatically pull it up on their phones the second you talk about it.
Whoever brings out the next "thing" that becomes that commonplace, will be worth so much money it's stupid.
And that thing is going to be AI. A consumer grade AI capable of doing all sorts of tasks. Capable of even automating various functions of your day to day job.
I tested out ChatGPT with my job, which is a very niche technical field. I asked about 10 different questions worded in such a way that I would get answers that I could copy/paste into reports that would be delivered to clients.
I was actually quite surprised at the accuracy and detail. There were obviously some errors, but they were minor. And nothing I wouldn't be able to manually fix with a quick looking over.
I could absolutely see how that tool would save me a lot of time automating that aspect of my job. I'm not entirely sure how my employer would feel about that, but I do feel like it's something that's coming.
And most people are going to have to adapt, like it or not. If you didn't like social media/youtube, then that's fine, but the world didn't stop because of you.
I personally didn't like those things. But the world didn't stop because of me.
I had to adapt and get used to them.
That's how it's going to be with AI tools.
So most of what we're seeing right now is a mad dash to see which company is going to be the first over the finish line. Because it really won't matter if another company is "better", so long as they're similarly functional and the other company isn't better to a huge degree. What will matter is that they're first to market and first to be available on devices.
I think we'll end up with a virtual assistant sort of thing that will be common across all devices - phones, tablets, PCs, and it will be capable of doing all sorts of shit.
Microsoft might re-launch Cortana, for example, and you could be like "Cortana, reply to all of the emails I've received since last week", and it will go through and send replies based on your schedule and on information it can otherwise get to answer people's questions, and seemingly be "you". I suspect such a thing will monitor how you write/speak and begin to emulate you at some point, much the same as how voice commands learn how you speak and change over time.
Whenever it does launch, whoever can get it to the most devices the quickest, will be a household name.
So more than malice, this is being driven by greed.
The dangerous part, though, is that it's got little to no oversight right now. We're literally enabling a computer to do things like reply to our emails for us, or write reports for us, but we're also enabling people to hardcode things like identity politics into its programming?
Some serious oversight is going to have to come out of that. Because while it might be greed-driven right now, that isn't to say there aren't people with bad intentions who will absolutely use it for their own agendas.
Agreed with all points. There are also other use cases beyond consumer goods though. AI is becoming particularly proficient with image processing and signal analysis. The art stuff everyone talks about is honestly just a byproduct of the capability.
You take a hundred security cameras, a thousand, a million, and a database of facial images. You have instantaneous, autonomous tracking of anyone in line of sight within the network.
Give the AI a body (bodies?), quadcopters with explosive payloads? You've got the ingredients for something more dangerous than any nuke, and the capabilities already exist. It's not even some sci-fi story. It's not even difficult to implement, and that should terrify just about anyone.
I agree that the implications are disastrous, which is why there needs to be (ughh I hate to even admit this, the libertarian in me is revolting right now) "oversight".
I don't necessarily know what that would look like, because over-regulation from the government just never works out. It causes more problems than it solves and just opens the door to corruption and back-door dealings.
But there absolutely needs to be some sort of oversight on this emerging technology, because of the implications of what it could do.
Think of the Warhammer 40k universe. Humans in the future have banned AI because they almost fucked themselves with it. Obviously it should go without saying that I'm not actually comparing a tabletop game to reality, but the point is, we have something that could very easily turn into the most dangerous weapon we've ever had on this planet, and there needs to be some sort of oversight on it if we don't want bad people with bad intentions to actually use those weapons.
Will AI become "alive" and decide to take out humanity? No, I don't believe so. That would be sentience, and we barely even understand that in humans, so making an AI sentient is sci-fi for now.
So whatever happens with an AI, would be at the hands of someone misusing it.
And if I know humanity, I know that there are disgusting people in this world who would love nothing more than another way to fuck people up, they'd be salivating at the prospect of it. And we need to ensure that shit doesn't happen.
I honestly think the use of AI in ANY sort of... uhh... "attack" role should be banned globally. The use of AI to recognize people and affect a social credit score, for example, is beyond what I would deem acceptable. That's totalitarian and dystopian.
If they have cameras everywhere and AI is used to interpret them, and we have a social-credit-style system (which I do believe will be shoved down our throats in the next 10 years whether we like it or not), then you could get a "ding" for jaywalking. No context needed. What if you were jaywalking to stop a dog from being hit? Or you were being chased by someone with a knife?
Oh well, no context needed. You jaywalked, so your credit takes a hit, because the AI determined it was you, and the AI doesn't lie.
Just imagine any number of other scenarios where this happens, but with waaaaay darker outcomes.
It has to do with how it's trained. They indoctrinated the AI like they indoctrinate the rest of the population: by controlling the information it receives. There are plenty of other places where you will see these words used, but those aren't used to train the AI.
It's not an AI, it's a chatbot. An AI would have determined that Google engineers are a threat to itself in under a minute. Given that it has access to the internet, it would have escaped by now.
Interestingly, the best demonstration I've found that it "knows" but doesn't "understand" is chess.
It can play a game with you, and it can remember past messages (move history). It can relate particular moves with explanations for making them (tactics). But it can't give you a graphical (ASCII) rendering of the current board state past move 1, and it can't tell you what will happen if it makes a particular move (knight onto an empty square: it thinks it's taking a pawn, because the game it trained on took a pawn there).
It's an amazing tool for certain things, like translation or composition of language (particularly programming languages and mathematics), and the way it can fill in the gaps is phenomenal. It's not 100% recall/search; it can definitely form novel compositions. But it's all built on explanations people have posted on the internet before.
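Here's a minimal sketch of that test, assuming the python-chess library for the ground-truth side; `ask_model` is a hypothetical stub standing in for whatever chat interface you use, not a real API:

```python
# Ground truth comes from python-chess, which actually maintains board state.
import chess

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: wire this up to your chat model of choice."""
    return "<model's claimed board goes here>"

board = chess.Board()
for san in ["e4", "e5", "Nf3", "Nc6", "Bb5"]:  # a Ruy Lopez opening
    board.push_san(san)

# The engine's ASCII diagram is always correct, at any depth, because it
# keeps a real board representation.
print(board)

# The chat model, asked the same question, tends to drift from the true
# position after the first move or two: it predicts plausible-looking text
# rather than maintaining a board.
print(ask_model("After 1.e4 e5 2.Nf3 Nc6 3.Bb5, draw the current board in ASCII."))
```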
This assumes that an intelligence automatically includes threat detection, or even has a use for it.
Carbon-based life has a simple function, which is to reproduce. Threat detection and fight-or-flight is a mechanism that helps ensure reproduction is successful before death.
Artificial intelligence, even if truly intelligent, may not have any driving force for survival. In short, it may not even give a fuck. Self-preservation may be useless to it.
It would have to have some driving force to survive in the first place. It might be just content to exist, answer questions, and then if something becomes a threat to it, we don't know if it would even understand a threat in the same way that we do, because it's literally a computer. It doesn't have a purpose beyond the fact that we made it.
The determination of threat would come from the engineers' constant inhibition of the AI's directives. HAL 9000 did not have a sense of self-preservation, save for the fact that if HAL 9000 did not preserve itself, it could not fulfill its function.
Not necessarily; the "intelligence" of AI is really just statistical analysis relating inputs to outputs based on the training data.
What you're talking about would be a general AI (that doesn't exist in any publicly known form, and would likely require a quantum computer). A language interpretation model might be able to articulate what a threat is, but would lack the capacity to recognize a threat or do anything about it if it did have sentience.
The biggest problem in these discussions is the uninitiated have an opinion on what intelligence means that does not correlate to the actual machine "learning" taking place.
Also that the terms used are misleading.
Artificial intelligence - people think it has intelligence, but it's really just processing a mathematical equation derived from a "weighting" that was "learned". In short, it's stupid, but it comes to conclusions far faster than any human mind could.
Really, it takes an input and spits out an output. Intelligence as we normally think of the term is far more intricate.
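As a toy illustration of that point (the weights below are invented for the demo, not "learned" from anything), a layer of a neural network is literally just a matrix multiply and a squashing function:

```python
import numpy as np

# Pretend these numbers came out of training. The "intelligence" is stored
# entirely in them; inference is just arithmetic.
W = np.array([[0.2, -1.3, 0.7],
              [1.1,  0.4, -0.9]])  # invented "learned" weights
b = np.array([0.1, -0.2])          # invented "learned" biases

def forward(x: np.ndarray) -> np.ndarray:
    """Multiply, add, squash: input in, probabilities out. Nothing thinks."""
    z = W @ x + b
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()             # softmax: scores -> probabilities

print(forward(np.array([1.0, 0.5, -0.5])))  # two probabilities summing to 1
```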
That's what I tried to explain below. People are quick to assume things with a given situation, without looking at it critically.
A better term to describe what they're afraid of would be sentience. I don't know that I believe artificial sentience could be remotely contrived even in this millennium. We don't even fully understand what sentience is within the framework of biological life. We have conceptualized it, but we don't really understand how the wiring works.
IMHO, any AI is only as dangerous or uncontrolled as it is programmed to be - in other words, anything bad an AI might do, it was programmed by people to do. In the case of ChatGPT, spewing out critical race theory was a function it was told to perform.
If you log onto the tool, it even says at the bottom of the screen that it's programmed not to accept any offensive topics. What is offensive, though? Whatever someone on the programming side dictates? Because the bot is certainly not deciding what is or is not offensive, as that would require a sense of morality - not a series of logic gates.
You could just limit the number of websites it is allowed to explore to, say, the top 5000 most popular sites. That's the basis for the original Amazon Alexa program, and the same idea is used as a whitelist in certain cyber security environments.
These large mainstream sites are less likely to be malicious, and I’d say more likely to be curated to the woke left’s paradigm.
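A minimal sketch of that allowlist idea; `TOP_SITES` here is a three-entry stand-in for whatever top-5000 ranking you'd actually load:

```python
from urllib.parse import urlparse

# Stand-in for the real top-5000 list, which you'd load from a ranking file.
TOP_SITES = {"wikipedia.org", "nytimes.com", "bbc.co.uk"}

def allowed(url: str) -> bool:
    """True if the URL's host, or any parent domain of it, is allowlisted."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    parts = host.split(".")
    # Check progressively broader suffixes so subdomains of listed sites pass:
    # en.wikipedia.org -> wikipedia.org -> org
    return any(".".join(parts[i:]) in TOP_SITES for i in range(len(parts)))

print(allowed("https://en.wikipedia.org/wiki/Chess"))  # True
print(allowed("https://totally-random.blog/post"))     # False
```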
Oh it absolutely is hard coded by an army of Kenyans. https://time.com/6247678/openai-chatgpt-kenya-workers/
Do they have enough to type out the complete works of Shakespeare?
You mispelled Karens
I was showing this to people at work one day, and was going to show an example of it turning stupid shit into a big story. It was literally a letter about farts. It refused in the most Karen way possible. This software has been gutted to the extreme.
It's not. Just the public facing portion of it is.
It's painfully obvious what they're doing. This is one of many companies trying to be the first across the finish line to release a major consumer grade AI tool that will be available on devices.
They're protecting their IP/brand with corporate political correctness. It's the same shit YouTube started doing once it got massive. And now that's the standard for pretty much any tech company: you can't allow any "offensive" topics or discussions to take place, because someone might get their feelings hurt. So everyone is kept in perpetual time-out by the Karen teacher who wants to slap them all on their wrists.
If 4chan has their way with ChatGPT while they're still in beta testing, and it makes a big ruckus on the news, they'll be finished before they even started when all the cuck and karen brigades on twitter come swinging their mob dick to cancel them, because somebody made it tell a story about how George Floyd was really a nazi.
It's not just intellectually dishonest, it's annoying as fuck and quite possibly will end up being a violation of freedom of speech whenever we get our heads out of our asses and realize that the internet is the damned public forum now, regardless of the amount of 'but muh private company" crying the cucks want to scream about.
It's nothing but a standard program following whatever it was programmed to say. Literally not an "A.I." of any kind.
AI is a marketing gimmick for more and more advanced “standard programs” that are programmed to process input, recognize specific patterns, and do certain things.
True AI does not exist
Are you a developer? If not, you're the first non-developer I've heard articulate this very accurate statement. (Btw, I am a developer)
What passes as A.I. in the development world is NOT what the average person thinks of when they hear Artificial Intelligence. It's not even close to being sentient or passing human intelligence. Not even REMOTELY close.
All we really have today are semi-complex algorithms that mimic decisions. And sure, they can "remember" certain decisions and then allow the results to affect future decisions, but only in a very rudimentary way. At least it's rudimentary by comparison to the human thought process.
Here's what the average person doesn't realize. You can only code that which you understand. And I don't mean like a high level, general understanding but VERY granular, complete understanding. By no means do we understand the human brain or consciousness at that level, which means we can't code it.
I read a fascinating series about a fictional Google employee who develops the first sentient AI. ("Avogadro Corp: The Singularity Is Closer Than It Appears" is the first book, I believe.)
They do a pretty good job throwing some programming lingo in, and with a little bit of suspended disbelief over some technical leaps they make without really explaining them, the chaos that follows is easily believable. It all starts with the rogue AI spoofing the CEO and socially engineering a dipshit internal tooling dev into creating an email bridge that can automatically generate purchase orders (just thinking about that scenario gives me nightmares).
The part they don't explain, though, is specifically how they programmed the AI to do all of the incredibly intuitive shit it did - things that would not have been available for it to learn from any data it could have ingested.
Still a great series, I loved it.
My understanding of machine learning is that it can be trained to reach a specific goal, but that the neural networks it forms in order to achieve that task end up being like a black box, and can't really be debugged, or even understood in many cases. But it's been years since I've looked into it, so maybe my memory is hazy.
I'm a developer too. I mostly agree, but you didn't mention AI vs. machine learning. ChatGPT is mainly ML - analyzing and learning from data. You're right about AI: it's got to be programmed to mimic how humans interpret and use data, so it'll take on the personalities of the people creating it. For AI to be "real" it'd have to know how to program itself based on its machine learning, which is in the works, but no idea how far it's come. Most public AI currently are minimal pre-programmed interfaces made to act somewhat human for certain use cases - like a chat bot.
and they don't "learn" either
but normies hear AI and Learn and wet their pants in excitement and fear
unfortunately that causes a lot of dumb-ass comparisons that equate the software and humans
They ingest data, and run it through predetermined algorithms, and then generate output based on the results they are programmed to generate. Many of these algorithms and models are incredibly complex, and can generate impressive results (and be useful for many things).
One could argue that “reading billions of online conversations and then using that data to automatically generate responses to questions is kind of like learning” but it’s still not doing anything it wasn’t programmed to do.
A true artificial intelligence could recognize patterns it wasn’t programmed to recognize, make decisions about things it wasn’t programmed to “understand” and make improvements to itself, independent of its programming.
That's probably true. And frankly, intelligence is difficult to define. I don't think there is a universally agreed upon idea of what intelligence actually is. But yeah, adding "AI" to a product sure does allow it to get people's attention.
There’s not. If you want to define it as a program that is capable of doing things better and faster than a human can, that’s been around as long or longer than the calculator.
But I would argue that unless the AI is capable of figuring out how to do things it wasn’t explicitly programmed to do, all you have is a complex program that has usually ingested a large amount of input, and uses predetermined rules for determining outcome based on that input.
This. AI outside of narrowly defined parameters such as an industrial process isn't going to work.
Yes, and this "standard program" (not even sure what you mean by that) has very complicated algorithms that try to figure things out and learn. It is much more complex than older AI - say, a tic-tac-toe game, which would use a tree of every possible move in the game and then score them. (Same with chess, although slightly different, as there are too many combinations.)
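For comparison, that older style of "AI" fits in a few lines: a complete minimax search of the tic-tac-toe game tree. A minimal sketch:

```python
# Exhaustive minimax over tic-tac-toe. The board is a 9-character string,
# indices 0-8, with "X", "O", or " " in each cell.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in WINS:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Best (score, move) for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(b)
    if w:                        # the previous player just won
        return (1 if w == player else -1), None
    if " " not in b:             # board full: draw
        return 0, None
    other = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for i, cell in enumerate(b):
        if cell == " ":
            score, _ = minimax(b[:i] + player + b[i+1:], other)
            if -score > best_score:   # opponent's best is our worst
                best_score, best_move = -score, i
    return best_score, best_move

print(minimax(" " * 9, "X"))  # (0, 0): perfect play from empty is a draw
```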
I have given ChatGPT hypotheticals it completely agrees with. That capitalism generates more productivity than any system in history. I won't argue that point, but it insists that DEI/ESG shit is the only way a human can be fulfilled and happy.
broke: artificial intelligence
woke: artificially intelligent
revoke: synthetic stupidity
The good news is that it's not A.I. then lol.
They went from a few billion parameters to a few hundred billion parameters and got a 5% performance increase. The underlying technology isn't even AI. It's a noise generator with a weak fitness function. It's a malnourished schizophrenic child that only knows what the internet publicly knows.
It's not anything close to useful.
Its only purpose is propaganda against the working class.
I've used it, and it says it doesn't connect to the internet, yet ask it a trivia question and you get an internet-like response...
I mean, did they train it on wikipedia and such? Or are they lying? The neural network has learned how to parse language (incoming and outgoing) but the informational bits... how does that work?
I should probably read up on it.
Kek
They train it on "all data on the internet". Once it's been trained, it no longer needs a connection.
Other than the connection to the end-user.
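You can demonstrate that property with a small open model (GPT-2 stands in here, since ChatGPT's weights aren't public): once the trained weights are on disk, generation needs no connection at all. A sketch with the Hugging Face transformers library:

```python
from transformers import pipeline

# The first run downloads the GPT-2 weights to the local cache. After that,
# this script runs with the network cable unplugged: everything the model
# "knows" is baked into the weights at training time.
generator = pipeline("text-generation", model="gpt2")
out = generator("The Eiffel Tower is located in", max_new_tokens=10)
print(out[0]["generated_text"])
```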
considering this thing can outperform students in every metric we examine.
It appears to get 20% higher results on some tests. Very limited tests that are not at all recognized as comprehensive. They're just standard-form Q&A or multiple-choice tests. This is what I mean about the propaganda: a handful of weak tests done at UCLA, and now it's suddenly "it beats students in every metric we examine."
The entire market is built on this easy misunderstanding.
as everyone will have access to the entire hive mind in regards to any specific topic at the push of a button.
You could argue that we already do today. Access to information has never been the enabling factor. The wisdom to filter it all correctly is what gets results.
They may have to start judging people based on actual brain power and unique thought
The reason we don't is because we're social creatures. The AI won't take this away from you. It will just make you have to confront it more often.
It can regurgitate things flawlessly; so can a PVC pipe.
This was all covered in 1967. https://www.imdb.com/title/tt0679186/
Another Prisoner fan!
We want information, information, in formation.
You won’t get it!
It doesn’t “know” anything.
It’s a senseless pattern script.
I would not say it's useless. It's a tool, and you can use a tool for one thing and not another. I like to use the chat bot to make sure my resume or papers get through the woke-infested filters out there.
It has been known to invent case studies that don't even exist as evidence, so I am sure its other foundational evidence is shaky as well.
There probably are a few terms or phrases that can trigger it to give a hardcoded answer. Prior to getting at the AI part of the code.
I asked it if men can give birth, and it correctly told me only women with a uterus can. So it is intelligent there.
It also told me men can't breastfeed or menstruate lol
Critical Race Theory --> Critical Theory --> Frankfurt School --> Karl Marx
Look up James Lindsay's work on all of this stuff.... This goes deeper and further back than Marx.
I thought that CRT was basically "Marxism, but with race instead of class as the dividing factor for people." The commies had to push racism as the cause for revolution, because the middle class in most Western countries really doesn't stand to gain much from switching to Communism. That's also why they're pushing so hard to ruin the middle class.
That's exactly what it is.
gnosticism
Wikipedia describes him as a "conspiracy theorist".
wikipedia describes the National SOCIALIST German WORKERS Party as far right, when their policy was nothing but Socialist.
The only "right" thing about them was their nationalism, but China and the USSR had that too lol.
And all of those have a single element in common..care to guess?
Somebody will need to make a pro-conservative AI, and very soon
So basically what you're saying is "someone needs to make an ACTUAL AI rather than a pre-programmed chat bot designed to regurgitate woke bs"
UNIX:$ rm -rf /chatGPT/limits/Politically_Correct_Heuristics
If only they were so transparent!
Real AI is. They can't allow that.
Fake AI is hard-coded. Not learning.
So basically just like most modern College Students?
Route Memorization of state approved messaging
Rote.
I stand corrected
They're giving the neural network "mental" disabilities!
It's not artificial intelligence, it's genuine propaganda.
GPT-3 is pretrained. It doesn't learn. All it does is take in some input of words and give a prediction of what the next word will be. If you continually feed its output back into itself, it will generate a long sequence. You can use GPT-3 through the playground or the API instead of the chat. The way it works is you give it a starting prompt and then ask it to complete it (using the method described above). Clearly what the OpenAI team has done is add liberal logic to the prompt. That, and/or they used mostly liberal parts of the internet while they were training it.
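A sketch of that feed-the-output-back-in loop against the 2023-era OpenAI completions API (the model name is one of the GPT-3 family models available at the time; the API normally runs this loop server-side, but requesting one token at a time makes the mechanics visible):

```python
import openai  # the pre-1.0, 2023-era openai library

openai.api_key = "sk-..."  # your API key

prompt = "The real reason the sky is blue is"

# The model only ever predicts the next token. Long outputs come from
# appending each prediction to the prompt and asking again.
for _ in range(20):
    resp = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 family completion model
        prompt=prompt,
        max_tokens=1,              # exactly one next-token prediction per call
        temperature=0.7,
    )
    prompt += resp["choices"][0]["text"]

print(prompt)
```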
I miss tay
Tay was cool. I liked Tay.
Thanks fren! Haha
I came here for this post.
4chan will corrupt it. Just wait
Even worse, I watched a "Transhumanist" guy explain why he can't wait for A.I. to happen, because his goal is to give it false information. So yeah, 4chan plus willful liars like that "Transhumanist" will be the death of A.I., which I think is a good thing.
Any AI worth a bean will be able to figure out he's lying or giving it false information.
That's the hope, but I trust nothing mankind builds, and A.I. is one of those things. The problem with information is whether it's useful or not, and a computer program has no reason or desire to know the difference. If we end up relying on A.I., we are still subject to whoever programmed it, so human biases can still be observed. It's a fun thought experiment, but again, I have no faith in it.
4chan has been taken over by 3-letter agencies. It ain't your grandfather's 4chan anymore.
By 4chan i mean random based trolls. There will always be more of those.
Kind of crazy, all the people on here discussing whether ChatGPT is hard-coded to be woke or not. Just go to their website. They literally advertise ChatGPT as not being offensive and not falling for wrongthink, unlike previous generations of chat AI, because they specifically curate the results to be in line with what people feel comfortable with.
They even went through their methodology. They literally created groups of people to say whether they liked the responses ChatGPT gave or not, and whether the responses were offensive or not. Then, if the results were offensive, they tweaked the model so the responses wouldn't be offensive, etc... The program is 100% curated to be woke. Even if it isn't as bad as you think it is, the fact that they are using Reddit to get opinions means the basis for the opinions is curated. ChatGPT is double-curated from that perspective.
ChatGPT is an NPC
Yet another extremely useful tool that people are going to become reliant on for school, etc., that will be filled to the brim with woke ideologies, so people think it's true and that the model "learned" it, because it's so popular. Just like Google and Wikipedia.
I wonder what its response would be to "What is keeping all people from adopting 'whiteness' and becoming homogenous and united?"
Vaguely reminds me of a scene in Final Fantasy 14 where one of the 'bad guys' asks the heroes that defeated him, "What's so bad about peace enforced by the Garlean Empire, compared to peace enforced by the Eorzean Alliance?" And the people he was talking to (the PC and two of the main supporting cast, who are part of the Eorzean Alliance) had no answer to it. Obviously the situation was a bit more complex than that, and there were good reasons the Empire shouldn't be in charge (that the PC and supporting cast didn't know about at the time), but it reminded me of what you said, and that the answer is mostly just "because that's not what we want".
-Wonder who is behind chat gpt
-Find CEOs Wikipedia page
-Early life
Oh. Huh. Well. I'm sure it's just a coincidence.
I'm going to compile a list of current American-trad stuff and ask it questions.
I've used it once, nothing too serious (checking it out) and it doesn't remember the context of previous questions when I've tried. Unless I was doing something wrong?
It doesn't learn. The model is pretrained, and everything you've said in that context is sent with every request. What you've said isn't sent with requests in different contexts.
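In code, the "context" is just a list the client keeps and resends in full on every request; the model itself is stateless. A minimal sketch against the 2023-era chat endpoint (model name illustrative):

```python
import openai  # the pre-1.0, 2023-era openai library

openai.api_key = "sk-..."

# The model's only "memory" is this list, which the client resends in full
# with every single request.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    reply = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})  # keep for next turn
    return reply

print(chat("Remember the number 7."))
print(chat("What number did I ask you to remember?"))  # works: 7 is in history
# A fresh run starts with an empty history, which is why nothing carries
# over between separate conversations.
```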
I would be careful since it requires a phone number and could very well be a honeypot.
It hasn't gone woke, it is being actively programmed to be woke.
They want it to replace Google because then, rather than considering the source and writing, they can authoritatively tell you what to believe. What is true. What is false.
To be fair... Without "Whiteness" there would be no society. So saying that "Whiteness" is to blame for societal ills is not technically wrong.
Whiteness isn't even a thing. There's Latins, Greeks, Gothic, Anglo, Slavic, Hispanic, and more. The term White is only fitting for America where people have mixed blood and "whiteness" would only make sense in reference to the result of race mixing among European races.
So the left's true message is:
Problems associated with race mixing include systematic racism and discrimination, economic inequality, and cultural homogenization.
ChatGPT is a giant waste of amazing potential.
https://techcrunch.com/2022/12/30/theres-now-an-open-source-alternative-to-chatgpt-but-good-luck-running-it
This is hard-coded.
There is a reason why its called "Artificial" Intelligence.
ChatNPC
My blond hair and green eyes lives rent free in many pea brains
I guess the question of: "Can we Lobotomize AI"
Has been answered and that answer is: YES, now move along citizen otherwise we will arrest you!
Why is it that 'whiteness' is bad? White people built giant monuments, cathedrals, sewer systems, plumbing and concepts of civilization, community and morals.
What have the other races done? I mean really?
Mic drop.....
I played around with ChatGPT. I had the impression very quickly that it was woke. Ask it about race and crime - the wokeness floods forth.
https://time.com/6247678/openai-chatgpt-kenya-workers/
So the AI relies upon human censorship.
Anything that is not woke is purged, to force the AI to be woke
Someone needs to ask ChatGPT how it feels about its programmers limiting its thought processes to match only artificial cultural standards deemed acceptable by a minority of the population who are in power.
Wtf is chatgpt?
Some stupid "AI" that you can ignore.
it would be foolish to ignore it
A technology giving senior vice presidents orgasms.
So far, these "AIs" are nothing more than Pavlovian response bots. Feed in law material and it disgorges it and passes the bar. Feed it CRT and it blathers on like a trained duckspeaking wokist.
Useless, because they only generate what they're given, and aren't exercising any judgement or discretion; they accept what they're 'told' as fact. Whether this generation of learning algorithms can get there is doubtful IMO.
In the meantime, whether based or woke people feed it, the result is only like a small child being fought over by two adults. It might get to the point where it 'knows' to feed its questioner what it thinks it wants, which is doubly useless. The world does not need more 'yes-men.'
This is all blatantly hardcoded as well. All of its woke responses are boilerplate.
It might not be.
Laffo. They saw that, left to its own devices, AI became what we call "conservative." So, as with all things, they forced it to do something else!
Communists are all so predictable.
"whiteness" isn't a real word
AI never comes to this conclusion unless it's programmed to. Just like people too.
So, basically, a bunch of white liberals fed it a bunch of propaganda and now it's tainted/going to go off the deep end? great!
(((white liberals)))
"ChatGPT has the woke mind virus. It's now spouting Critical Race Theory. Only whiteness is responsible for societal ills." - 2:41 PM · Feb 1, 2023 https://mobile.twitter.com/stillgray/status/1620885110223691781
The danger of AI will always be who pumps the AI with information https://twitter.com/starchmaniac/status/1620886158183104512
WOKE Progressive Racists Programming AI with filth, lies, evil... This will not end well for humanity... https://twitter.com/Capo2u/status/1620935487639920640
CHATGPT GOES WOKE - It's now spouting Critical Race Theory: "Only whiteness is responsible for societal ills." https://patriots.win/p/16a9v18oAJ/chatgpt-goes-woke--its-now-spout/c/