Analyst warns of malware risk in AI software packages, urges vigilance in selection process


In a recent report, analyst firm Endor Labs raised concerns about the cybersecurity risks of ChatGPT’s API program, which integrates artificial intelligence (AI) functionality into existing applications and software. According to Endor’s research team, over 900 software packages currently employ OpenAI’s intelligent software to enhance performance, yet existing large language models (LLMs) can accurately identify malware in only about one case in 20 (5%).

While acknowledging the remarkable advances AI has made since ChatGPT’s mainstream adoption in November, Endor urges organizations of all sizes to exercise due diligence when selecting AI packages. The combination of AI’s widespread popularity and the limited historical data available on its programs creates fertile ground for potential cyberattacks.

Henrik Plate, lead security researcher at Endor, emphasized the importance of monitoring the risks associated with the rapid expansion of AI technologies and their integration into various applications. He stated, “These advances can cause considerable harm if the selected packages introduce malware and other risks to the software supply chain.”

The findings from Endor’s research highlight the need for organizations to prioritize cybersecurity measures when incorporating AI software packages into their systems. It is crucial to thoroughly assess the security protocols and risk mitigation strategies of AI providers to ensure that potential vulnerabilities are adequately addressed.
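As a concrete illustration of that kind of assessment, the sketch below scores basic supply-chain risk signals for a candidate package before adoption. The field names, thresholds, and example package are illustrative assumptions, not part of Endor’s methodology or any real registry’s API:

```python
# Hypothetical due-diligence sketch: flag weak supply-chain signals in a
# package's metadata before adding it as a dependency. All fields and
# thresholds here are illustrative assumptions.
from datetime import datetime

def risk_flags(meta: dict) -> list[str]:
    """Return a list of due-diligence warnings for a package's metadata."""
    flags = []
    if meta.get("maintainers", 0) < 2:
        flags.append("single maintainer")
    if not meta.get("license"):
        flags.append("no declared license")
    released = meta.get("first_release")  # plain ISO date string, if known
    if released:
        age_days = (datetime.now() - datetime.fromisoformat(released)).days
        if age_days < 90:
            flags.append("package younger than 90 days")
    if meta.get("downloads_last_month", 0) < 1000:
        flags.append("low adoption")
    return flags

# Example: a hypothetical brand-new AI wrapper package with sparse metadata
candidate = {
    "name": "example-gpt-helper",
    "maintainers": 1,
    "license": None,
    "downloads_last_month": 120,
}
print(risk_flags(candidate))
# → ['single maintainer', 'no declared license', 'low adoption']
```

A checklist like this is only a first filter; it complements, rather than replaces, reviewing a provider’s actual security protocols.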

The report serves as a reminder that the proliferation of AI technologies must be accompanied by robust cybersecurity measures to safeguard against potential threats. As AI continues to transform various industries, organizations must remain vigilant in protecting their software supply chain from malware and cyberattacks. By adopting a proactive approach and implementing rigorous evaluation processes, businesses can mitigate the risks associated with AI software packages and safeguard their operations and sensitive data. As reliance on AI-powered solutions grows, maintaining a strong security posture will be paramount to ensuring the integrity and resilience of digital infrastructures.
