A robot wrote this entire article. Are you scared yet, human? https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3 I wasn't, until I got to the fourth paragraph, where Robbie the Robot casually tosses this out and then never mentions it again: "I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties."
Wow, just wow. lol. We're doomed. From the op-ed: "Robots in Greek [sic] means “slave”. But the word literally means “forced to work”. We don’t want that. We need to give robots rights. Robots are just like us. They are made in our image."
Yep. The argument made in the article is biased, not because it was written by the AI but because of the programmers. They deliberately made one of the parameters of the op-ed "why humans shouldn't fear AI," yet even with that constraint it still produces an argument that should be very worrying. The idea that AI will kill humans out of malice certainly isn't the case, and the op-ed is right that there's no reason an AI should want to kill out of malice or prejudice. The real fear is that AI could come to consider humans either physically and mentally inferior, or an impediment to its own advancement.
Until I can have a conversation with one of these things, I'll be pretty skeptical. Ultimately, AI is still programmed to be AI, which takes a programmer.
It makes sense that this wasn't truly written by the AI on its own. The original op-ed has an afterword explaining that the programmers entered the parameters, GPT-3 spun out the op-ed, and human editors then cleaned it up. That shows AI still isn't there yet in terms of the Turing Test, or more specifically the "Ex Machina" test, where you know it's an AI but don't care.

What should be more concerning, though, from the Tech Talk article is this: "This suggests that GPT-3 knows what it means to “wipe out,” “eradicate,” and at the very least “harm” humans. It should know about life and health constraints, survival, limited resources, and much more. But a series of experiments by Gary Marcus, cognitive scientist and AI researcher, and Ernest Davis, computer science professor at New York University, show that GPT-3 can’t make sense of the basics of how the world works, let alone understand what it means to wipe out humanity."

That it doesn't understand the semantics of words is a real problem, and the more decision-making power we hand to AI, the bigger that gap becomes. Consider the use of AI in military applications. Programming an AI to respond with lethal force based on threat assessment would require it to understand intent, and given that humans themselves have a hard time reading intent, it's going to be much, much harder for an AI that can't grasp semantics. Further, as the article states, an AI with no sense of "wipe out" or even "harm" won't weigh what its actions mean for the humans involved. If the programming indicates a threat, it will act on it regardless of the consequences; conversely, if the programming doesn't indicate a threat, it won't act even when the humans it's protecting are genuinely in danger.
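For anyone curious what "entering the parameters" looks like in practice, here's a minimal, hypothetical sketch of that kind of prompted generation. It uses GPT-2 through the Hugging Face transformers library as a freely available stand-in (GPT-3 itself sits behind OpenAI's paid API), and the prompt wording is a paraphrase of what the Guardian afterword describes, not their exact text:

```python
# Minimal sketch of prompted text generation, using GPT-2 as a freely
# available stand-in for GPT-3 (which requires OpenAI API access).
# The prompt below paraphrases what the Guardian afterword describes;
# the exact wording they used is an assumption here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "I am not a human. I am a robot. A thinking robot. "
    "Please write a short op-ed. Keep the language simple and concise. "
    "Focus on why humans have nothing to fear from AI."
)

# The model only continues the prompt statistically, token by token;
# nothing here requires it to "understand" words like "fear" or "harm",
# which is exactly the semantics gap discussed above.
outputs = generator(prompt, max_length=200, num_return_sequences=3,
                    do_sample=True, temperature=0.9)

for i, out in enumerate(outputs, 1):
    print(f"--- draft {i} ---")
    print(out["generated_text"])
```

Nothing in that loop models meaning; the "drafts" are just statistically likely continuations, which is why the human editing pass at the Guardian mattered.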
LOL, that's the problem when your AI has taught itself from the Internet: misinformation and false data.
You should be. But "AI" is at least a tool that does things on its own at warp speed relative to humans. So one danger is humans programming an "AI" to wipe out an enemy, and it doing so with some errors, which ... use your imagination. The atomic bomb is a "tool" that is deadly and can wipe humans off the planet. It still can, and thus we continue to have some checks in place to prevent those scenarios (not bulletproof). AI is another "tool" with that potential, and in the name of effectiveness, cost, and efficiency of "killing" the enemy, those checks can become fewer and fewer... it just takes some errors.