With the rapid advancement of artificial intelligence (AI), new and potentially hazardous uses are becoming possible. One of the more unsettling is the possibility that AI will be used to create synthetic viruses capable of starting pandemics.
In recent years, several studies have examined whether AI could be used to produce synthetic viruses. One study, published in the journal Nature Biotechnology, indicated that AI might be used to design viruses more contagious and lethal than naturally occurring ones.
The study's authors cautioned that this would pose a significant risk to global health, since it could make it easier for terrorists or other malicious actors to develop and disseminate lethal pathogens. Another study, published in 2023 in the journal Science, found that AI could be used to produce synthetic viruses resistant to existing vaccines and therapies. If a pandemic were caused by such an engineered virus, it would be considerably harder to contain.
AI is a tool, and like any tool it can be used for good or ill; even so, the possibility that it will be used to develop synthetic viruses is a real concern. It is our responsibility to ensure that AI is applied responsibly and ethically.
Beyond the risks already listed, there are other concerns related to AI-generated synthetic viruses. AI might be used, for instance, to design viruses intended to infect people of a particular race or ethnicity. This could lead to bioweapons capable of being used in atrocities such as genocide.
Mustafa Suleyman, co-founder of DeepMind (now Google DeepMind), raised the possibility that AI may be used to create artificial pathogens and start pandemics in a podcast interview in July 2023. Suleyman described a "darkest scenario" in which people experiment with pathogens to engineer viruses that are more contagious or more lethal.
Other experts in AI safety have echoed Suleyman's warning. Researchers at Georgetown University's Center for Security and Emerging Technology argued in a recent essay that "the potential for AI to be used to create and deploy synthetic viruses is a serious and emerging threat."
The researchers called for international standards and regulations to govern the application of AI in the biological sciences. They also urged governments to invest in AI-safety research and to develop tools for detecting and stopping malicious uses of AI.
Some measures to prevent AI from being used to generate synthetic viruses
Improving synthetic DNA screening. Stronger screening of synthetic DNA orders would make it harder for hostile actors to obtain the genetic material needed to construct synthetic viruses.
Strengthening worldwide biosecurity regulations. This would help limit the cross-border spread of synthetic viruses.
Educating the public about the dangers of AI-generated synthetic viruses. This would help spread awareness of the problem and encourage people to report unusual activity.
Developing technology that can recognise and stop the malicious use of AI. This could entail building AI systems that detect and flag suspicious activity, or creating physical security measures to guard against unauthorised access to AI systems.
Promoting ethical research practices in AI. This could entail setting codes of conduct for researchers or supporting programmes that fund research conducted responsibly and ethically.
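To make the first measure above more concrete, here is a minimal sketch of how screening a synthetic DNA order might work in principle. Everything in it is illustrative: the blocklist fragments are made-up placeholder sequences, and the window size is an arbitrary choice. Real screening systems used by gene-synthesis providers rely on curated databases of sequences of concern and homology search, not exact substring matches.

```python
K = 12  # window size for comparison (illustrative choice)

# Hypothetical fragments standing in for sequences of concern;
# a real blocklist would come from a curated pathogen database.
BLOCKLIST = {
    "ATGCGTACGTTA",
    "GGCCTTAAGGCA",
}

def kmers(seq: str, k: int = K):
    """Yield every k-length window of the sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def flag_order(seq: str) -> bool:
    """Return True if any window of an ordered sequence matches the blocklist."""
    return any(window in BLOCKLIST for window in kmers(seq.upper()))

# An order containing a listed fragment is flagged; a benign one is not.
print(flag_order("ttttATGCGTACGTTAcccc"))  # True
print(flag_order("AAAAAAAAAAAAAAAA"))      # False
```

The point of the sketch is simply that screening compares incoming orders against known dangerous material; the policy challenge is keeping the reference database current and making screening universal across providers.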
It is important to remember that no single fix will entirely stop AI from being used to create synthetic viruses. However, a thorough strategy that combines several of these countermeasures can help reduce the risks and make it harder for malicious actors to succeed.
In addition to the measures above, a number of other steps can help prevent AI from being used to generate synthetic viruses. These include:
Building security and safety into AI systems from the start. This could entail designing AI systems that are hard to hack or manipulate, or that can recognise and stop hostile activity.
Encouraging transparency and accountability in the development and deployment of AI. This could entail requiring researchers to publish their findings or establishing mechanisms for public oversight of AI systems.
Fostering a culture of ethical AI. This could entail educating the public about the risks of AI or encouraging researchers to build AI systems that are beneficial.
Artificial intelligence is a powerful tool, and it is up to us to use it wisely. We need safeguards to stop AI from being misused, and we need to inform the public about the dangers of AI-generated synthetic viruses so that people can recognise the risks and take precautions.
The possibility that AI may be used to create synthetic viruses is a serious concern, but it is one we can address. By taking steps to reduce the risks and promote responsible AI, we can help ensure that AI is used for good rather than for harm.