Artificial Intelligence has been at the heart of SentinelOne's approach to cybersecurity since its inception, but as we know, security is always an arms race between attackers and defenders. Since the emergence of ChatGPT late last year, there have been numerous attempts to see if attackers could harness this or other large language models (LLMs). The latest of these attempts, dubbed BlackMamba by its creators, uses generative AI to produce polymorphic malware.

The claims associated with this kind of AI-powered tool have raised questions about how well current security solutions are equipped to deal with it. Fears around the capabilities of AI-generated software have also led to broader concerns over whether AI technology poses a threat and, if so, how society should respond. Do proof of concepts like BlackMamba open up an entirely new threat category that leaves organizations defenseless without radically new tools and approaches to cybersecurity? Or is "the AI threat" over-hyped and just another development in attacker TTPs like any other, one that we can and will adapt to within our current understanding and frameworks?

In this post, we tackle both the specific and general questions raised by PoCs like BlackMamba and LLMs like ChatGPT.

## What is BlackMamba?

According to its creators, BlackMamba is a proof-of-concept (PoC) malware that uses a benign executable to reach out to a high-reputation AI service (OpenAI) at runtime and retrieve synthesized, polymorphic malicious code intended to steal an infected user's keystrokes. The use of AI is intended to overcome two challenges the authors perceived as fundamental to evading detection. First, by retrieving payloads from a "benign" remote source rather than an anomalous C2, they hope that BlackMamba traffic would not be seen as malicious.
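The pattern described above, a benign loader that fetches freshly generated code at runtime and executes it only in memory, can be sketched in a few lines. This is a hypothetical, defanged illustration, not BlackMamba's actual code: the function `fetch_generated_payload` is a local stand-in for a network request to an AI service, and the "payload" it returns is a harmless snippet.

```python
# Hypothetical sketch of the runtime code-retrieval pattern attributed
# to BlackMamba. A local stub stands in for the real call to an AI API.

def fetch_generated_payload() -> str:
    # Stand-in for a runtime request to a high-reputation AI service.
    # In the described PoC, this text would be newly synthesized on
    # each run, making every delivered payload polymorphic.
    return (
        "def collect():\n"
        "    return 'captured-input'\n"
        "result = collect()\n"
    )

def run_in_memory(source: str) -> dict:
    # Execute the retrieved source entirely in memory: the generated
    # code never touches disk, so file-based scanners only ever see
    # the benign loader executable.
    namespace: dict = {}
    exec(source, namespace)
    return namespace

ns = run_in_memory(fetch_generated_payload())
print(ns["result"])
```

Because the payload exists only as a string executed in the loader's own process, both of the evasion goals the authors describe are visible here: the network traffic goes to a reputable endpoint, and nothing malicious is written to disk.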