It seems like all of a sudden the world is talking about Artificial Intelligence (AI) and the pros and cons of technology that can learn on its own. The potential uses of AI to help humanity seem endless: greater efficiency, the eradication of mundane tasks from our lives, and creative and innovative solutions to problems we didn't even know we had. Yet the dangers of AI are no longer only the subject of science fiction. Instead, they now seem real and immediate. Is AI causing us to make the wrong decisions? Is it invading our privacy? Is it convincing us of lies presented as factual truths? Giorgi Gobronidze, the owner of face search company PimEyes, shares his views on the moral dilemmas of utilizing AI technology, and how we can do it correctly.
For Giorgi Gobronidze, AI resembles firearms more than anything else he can think of, in terms of its ability to revolutionize the landscape of whatever field the technology is deployed in.
"Once the firearm was invented, military service became cheaper and easier, because it did not require as much special training to shoot a gun, [as it does other types of weaponry]. Before firearms, for example, it might take years for a soldier to master the sword or the axe," says Giorgi Gobronidze. "In this case, labor becomes easier."
AI is like a firearm, thinks Giorgi Gobronidze
The firearm example, says Giorgi Gobronidze, shows that with the advent of new technology can come untold advantages, such as efficiencies and improvements in the labor market. But his use of firearms as an example is intentional, because it also shows the potential downside of technological advances such as guns or artificial intelligence. In the wrong hands, with the wrong uses, and with a lack of proper training and regulation, they can become dangerous weapons. As production of the new technology becomes easier and cheaper - as it has done with firearms over the past three centuries and AI in the past few years - the technology can become more easily disseminated throughout society. That's both good and bad.
"[Artificial Intelligence] technology is already several steps ahead of legislation," says Giorgi Gobronidze. "There is an absence of an open dialogue between those who speak the language of technology and those who speak the language of law. It is absolutely impossible to explain how artificial intelligence works to a person who has knowledge of the precedential law and brings their precedents from the 19th century. At the same time that person lacks the essential skills and technical knowledge related to AI technology, or the internet, generally.
For example, most regulators try to protect users' privacy online as if the internet were a plot of land with a border. Yet it is hard even to determine which country a website belongs to when its domain is registered in one country, its servers are located in another, and its administrator and owner are based in a third. Accordingly, regulators often tend to turn a blind eye to the actual problem, such as the existence of websites that permanently infringe on users' rights, and instead target technologies that bring the problem to the surface.
The most serious concern PimEyes has faced stems from the fact that it is an accurate search engine that searches only content published on public websites. That means the data PimEyes finds already exists and is open to literally anyone with an internet connection. PimEyes doesn't create content of its own, but displays what is available."
Where Giorgi Gobronidze thinks technology and law need to meet
Old law, Giorgi Gobronidze argues, is "absolutely irrelevant" to the contemporary situation. In law there are few analogs to AI today, says Giorgi Gobronidze. Even the example of firearms can only go so far before it becomes useless as a model for regulating AI.
"And once there are no analogs, the precedent has to be established. And to make this precedent be established, what is necessary is to have a dialogue between regulators and between companies like us," Giorgi Gobronidze says of his own company, PimEyes. "I envision myself as a partner of regulators, not as someone who is hiding from regulators."
The discussion around creating precedents in AI law is timely, both as a path forward for the regulation of AI, and as an example of what can go wrong when AI is used incorrectly.
AI content generator ChatGPT made headlines earlier this year when lawyers at a New York law firm used the service to help write a brief for a court case. The brief they submitted to court contained content generated by ChatGPT that allegedly was not sufficiently vetted by the lawyers. That content cited precedents and court cases that ChatGPT made up entirely.
Yet AI-driven content management systems are also being used to great effect by media organizations to disseminate news quickly, without the need to wait for journalists to write long stories. AI can read market news from companies and rapidly publish factual articles, and it can quickly edit a journalist's story to match a given style.
Giorgi Gobronidze thinks some jobs will be completely transformed
Vocations that focus on gathering, analyzing, and disseminating information will be totally changed by artificial intelligence, says Giorgi Gobronidze.
"I have already predicted this - that many professions simply will disappear. But this does not mean that professions will disappear totally because one profession might disappear but will require one other profession to emerge there," says Giorgi Gobronidze. "It means that we will all have to reshape our skills or adjust to the new requirements. This process can be compared to an industrial revolution, which has totally changed, and in many ways simplified requirements of skill for the workforce, which even led to the emergence of new social classes, and therefore had a bigger impact, not only on the economy, but on political thought as well. Therefore, we may claim that AI, as any other type of technology advancing that rapidly, will be able to transform society economically and politically."
Admittedly, this could be "a bit problematic" for people who have fixed lifestyles or professions, says Giorgi Gobronidze, because for some people it is not easy to change professions. The process may be painful for many who are used to working in familiar environments, as not everyone will be capable of adjusting to new requirements. Accordingly, decision-makers should consider the impact of the new technology, to avoid the emergence of additional social gaps and increasing inequality.
"I have done it several times and I know how difficult that can be, because every time you are starting from [scratch]," says Giorgi Gobronidze.
The people who succeed will need to be good at transforming themselves, he says.
Giorgi Gobronidze's example of AI in social research
Before he was an AI technology entrepreneur, Giorgi Gobronidze was a social scientist. In that discipline, there is a tool called the Statistical Package for the Social Sciences (SPSS) that has been around since the 1960s. Over the decades the use of the tool has changed - especially in the past few years.
"I have been studying SSPS for my social research, but now it is very easy to be done by artificial intelligence instead of me," says Giorgi Gobronidze as an example of functions that were once done by humans that can now be accomplished by AI. "So what I'm doing now is, I would not make quantitative queries. Instead, I would make qualitative analysis via the software."
The AI takes over the numbers-heavy quantitative analysis, while Giorgi Gobronidze can concentrate his efforts on the research aspects that are better left to humans - the qualitative analysis.
"There are machines that can do something for me. It means that I simply have to know how to use the technology, and I'm gaining an advantage over my counterparts and rivals in the [social research] labor market," says Giorgi Gobronidze.
Artificial Intelligence is without doubt a society-changing technology. It will help us become more efficient and has the potential to increase our knowledge exponentially. But it also has its risks, both to society as a whole and to the individual. For AI to be a tool for good requires a societal moral compass and smart regulators working with practitioners, technologists, and society as a whole to plot a path in the right direction, says Giorgi Gobronidze.