Artificial intelligence (AI) is once again at the centre of the scene worldwide, with the recent arrival of the GPT-4 language model, the latest version of ChatGPT. The new tool dazzles with its capabilities, but it also poses great dilemmas, which are also faced by its creator, Sam Altman, current CEO of the OpenAI company.
In a recent interview with the American media outlet ABC News, Altman acknowledged that the technology his company is developing could bring real dangers to humanity.
“We must be careful, and at the same time understand that it is no use keeping everything in a laboratory. It is a product that we must release and bring into contact with reality, making mistakes while the risks are low. Having said that, I think people should be happy that we are a bit scared of this. If I said it didn’t scare me, you shouldn’t trust me (…)”, he stated.
Disinformation and cyberattacks: Altman’s main concerns
The OpenAI CEO had previously expressed these fears in a post on his personal Twitter account, where he indicated that his main concern is that artificial intelligence tools such as ChatGPT could be used to generate content to misinform people.
“I am particularly concerned that these models could be used for large-scale disinformation. Now that they are getting better at writing computer code, they could be used for offensive cyberattacks,” he stated in the interview.

On the other hand, Altman considered that the ability of these tools to write code in various programming languages could create cybersecurity risks. “Now that they have improved their programming capabilities, they could be used to carry out more aggressive cyberattacks,” he stated.
Still, Altman reflected, “This is a tool that is largely under human control.” He noted that GPT-4 “waits for someone to give it an input”, and that what is worrying is who has control of those inputs.
Humanity needs time to adapt to Artificial Intelligence
Altman assured that even though the changes driven by advances in artificial intelligence technology can be considered positive, humanity needs time to adapt to them. He insisted that OpenAI must also go through this adaptation process in order to correct inappropriate uses or the harmful consequences that this type of system may have.
“We will certainly make corrections as these negative events occur. Now that the risks are low, we are learning as much as we can and establishing constant feedback to improve the system and avoid the most dangerous scenarios”, assured the CEO of the company.
