Military and Strategic Journal
Issued by the Directorate of Morale Guidance at the General Command of the Armed Forces
United Arab Emirates
Founded in August 1971

2024-06-06

AI: A Cybersecurity Tool and Threat… We Want AI to Fight Against AI

In the realm of cybersecurity, artificial intelligence (AI) plays a dual role: a robust shield against threats, yet also a looming peril. 
 
Cybersecurity leaders must navigate a complex and ever-changing environment where cyberattacks can range from sophisticated state-sponsored intrusions to opportunistic malware. 
 
What is Dark AI?
‘Dark AI’ is the use of artificial intelligence technologies for unethical, malicious or illegal purposes. Many of AI’s direct negative impacts will only be seen once they materialise, because it is in the adversary’s nature to work in the shadows. 
 
However, AI will also have significant, though more gradual, indirect impacts on the threat landscape. 
 
A series of anticipated phases outlines the increasing influence, capability and impact of adversaries using Dark AI. 
 
With AI’s transformative potential comes a pressing need for ethical considerations, proactive defence strategies, and collaborative efforts within the cybersecurity community. 
Dark AI looms as a significant concern, emphasising the importance of understanding and countering malicious AI-driven threats. 
 
As organisations seek to integrate AI into their business practices, they must navigate complex ethical landscapes while embracing innovative solutions to bolster cybersecurity defences. 
 
Against the backdrop of emerging technologies like cybersecurity mesh, the challenge lies in fostering resilience and adaptability in the face of evolving threats. This dynamic landscape demands continuous vigilance, innovation, and strategic foresight to ensure a secure and prosperous digital future.
 
Exploring how hackers leverage AI for nefarious purposes illuminates the urgency for organisations to fortify their defences against such attacks. By understanding these evolving strategies, organisations can proactively safeguard their digital assets and mitigate the risks posed by AI-driven threats.
 
Cybersecurity Themes  
Undoubtedly, AI emerges as one of the predominant themes in cybersecurity. It manifests in various ways: how companies allow employees to use AI, how security teams strategise to safeguard company data and ensure proper usage, and, critically, how security teams plan to defend against increasingly sophisticated attacks. 
 
A common misconception holds that AI is out of reach for attackers, on the belief that building and using models demands extensive processing power and resources. 
A notable shift is also occurring with the rise of ‘dark AI.’ With the ability to download models from platforms like GitHub and deploy them on various systems, hackers now have access to sophisticated AI tools that can be used for malicious purposes.

While these AI models may have been developed for legitimate uses, their misuse by hackers underscores the potential dangers of AI in cyberattacks.
 
This points to a notable ability to access pre-built models from public repositories, challenging the conventional narrative. As cyber-attacks evolve, there is a discernible shift in tactics: hackers we once classified as independent are now often ‘conscripted’, leveraging pre-existing, sophisticated attack methods.
 
Consequently, hackers now possess what could be termed malevolent AI, which elevates their skill set. On the positive side, security teams are responding by enhancing the skills of their security operations teams. Moreover, ransomware remains a significant challenge, a topic to be explored at an upcoming security summit in Washington, DC.
 
Recruited into orchestrating intricate and large-scale attacks, hackers are now replicating these methodologies across multiple targets. This propagation of tactics underscores the emergence of dark AI. Cloud security naturally remains a top concern among professionals.
 
Impact of AI on Cybersecurity
AI’s influence on cybersecurity is explosive. The surge in AI adoption across industries has become something of a marketing phenomenon: companies and vendors alike are quick to tout their AI capabilities, leading to a proliferation of AI-driven solutions on the market. 
 
Additionally, there’s anticipation surrounding the concept of artificial general intelligence (AGI), hinting at future advancements in AI technology. Presently, the world is witnessing the integration of AI into existing systems, with increasingly sophisticated language models addressing complex challenges. 
 
However, amidst these advancements, a notable challenge arises in the form of AI “hallucinations.” This refers to instances where AI provides inaccurate responses due to flawed data or malicious model alterations. 
 
Consequently, cybersecurity teams are fielding requests from businesses eager to harness AI, particularly in AI-enabled search engines. From an IT perspective, safeguarding corporate data is paramount, especially considering the potential risks associated with exposing classified information to the Internet. Hackers could exploit vulnerabilities in AI systems, compromising sensitive projects and harming business interests. 
 
To mitigate these risks, cybersecurity experts advocate a cautious approach, emphasising the importance of identifying viable business cases for AI adoption, providing comprehensive employee training on AI usage, and implementing measures to prevent unauthorised access to sensitive data. Establishing private AI environments may also be considered to bolster security measures. Amidst these challenges, cybersecurity teams find themselves grappling with complex issues in a relatively short timeframe, underscoring the urgency of proactive risk management strategies.
 
Potential Risks Associated with Artificial General Intelligence 
Artificial General Intelligence (AGI) represents the pinnacle of AI capabilities, where machines can perform tasks as proficiently as, or even better than, humans. 
However, even with advancements, there are instances where AGI may still struggle, such as when faced with questions it doesn’t comprehend. 
 
AGI stands as a significant milestone in the evolution of AI, with projects such as GPT-5 and initiatives by figures like Elon Musk indicating strides towards this goal. Rumours suggest that breakthroughs have been made in overcoming key challenges, driving anticipation for the potential capabilities of AGI.
 
Indeed, the prospect of AI surpassing human capabilities raises concerns for many individuals. The fear of AI replacing human workers is a prevalent one, as the concept of AI becoming indistinguishable from humans blurs the lines between reality and science fiction. 
 
The idea of AI achieving or even surpassing human-level intelligence prompts questions about control and regulation. With access to vast computational resources, AI may indeed outpace human cognitive abilities exponentially. 
 
However, the goal of AGI is not to supplant humans but rather to address challenges and achieve parity with the human brain. Contemplating how to control and manage AI once it surpasses human capabilities becomes crucial in navigating this uncertain future.
 
Bad Actors Using AI Against Enterprises
In simpler terms, imagine a standard gaming setup: a typical PC with an Nvidia graphics card, about 16 GB of memory, and a recent AMD or Intel Core i7/i9 processor. 
 
With a setup like this, we can download and virtualise powerful AI models, essentially turning our gaming rig into a potent AI machine. While this setup might cost around US$3,000 to US$5,000, it can go even higher for more advanced configurations. 
 
Once set up, the Nvidia graphics card can handle the heavy lifting, processing data and learning from it. Large language models are also available on certain platforms, offering access to remarkably advanced capabilities. 
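To make this concrete, here is a minimal sketch of the kind of setup described above, assuming Python with the Hugging Face `transformers` and `torch` packages installed; "gpt2" is simply a stand-in for any openly downloadable model:

```python
# Minimal sketch: running an openly downloadable language model on
# consumer gaming hardware. Assumes the Hugging Face `transformers`
# and `torch` packages; "gpt2" is a placeholder for any open model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder: any downloadable model checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Shift inference onto the graphics card when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same workflow scales to much larger models; the practical constraint is the graphics card's memory, which is why the US$3,000 to US$5,000 rig described above is already a potent AI machine.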
 
If you’re willing to venture into darker corners of the internet, you can find even more sophisticated options.
 
In the realm of dark AI, the constraints of ethics and morality are often disregarded. Unlike regular AI systems, which might refuse unethical requests, dark AI platforms readily accommodate nefarious queries. 
 
For example, while conventional AI platforms might refuse to provide instructions for illegal activities like lock-picking, underground forums or dark web platforms offer detailed guides on executing sophisticated cyber attacks. These include strategies such as leveraging vanishing proxies or VPN services to conceal one’s digital footprint, downloading specialised attack kits tailored for targeting Fortune 100 companies, and executing prescribed actions to compromise their security defences. 
 
The accessibility of millions of pre-existing learning models further facilitates such malicious endeavours, making the proliferation of dark AI a particularly concerning development in cybersecurity.
 
Defending Against Threats Posed by Dark AI 
Many companies depend on third-party providers for essential data and services. However, this partnership introduces risk, as the third party may have different biases and a risk tolerance that does not align with the company’s expectations or standards. This mismatch can lead to vulnerabilities, including rushed development that lacks security measures and increased susceptibility to manipulation.
 
Security rests mainly on three principles: confidentiality, integrity, and availability; every control put in place exists to protect them. As attackers’ techniques for undermining those principles advance, defences must advance in kind. 
 
Companies can mitigate risks through:
Comprehensive defence strategy: It is important for businesses to vet and monitor AI systems, assess the reliability of third-party involvement, and guard against a wide array of potential threats, including those posed by disingenuous users and corrupted algorithms.
 
Responsible governance and disclosure: Cybersecurity threats and ethical hazards call for balanced governance. The absence of proactive measures could lead not just to reputational damage but also to an erosion of trust in entire industries.
 
Responsible AI practices: From developers to businesses, responsible AI practices such as a human-centred design approach, privacy and security of data, transparency, and accountability must be ingrained at every value chain stage.
 
Regulatory compliance: Stay up to date with evolving regulations and standards related to AI and cybersecurity, such as ISO 27001 or the National Institute of Standards and Technology (NIST) cybersecurity framework. Ensure compliance with relevant regulations to avoid legal and regulatory risks.
 
To counter AI threats effectively, organisations must engage in simulations and employ robust processing power dedicated to combating cyber threats. The traditional categorisation of hackers into black hat, white hat, and grey hat serves as a framework for understanding the spectrum of motives and behaviours in the cybersecurity landscape.
 
Cloud Security 
It seems like one of the most significant trends impacting cybersecurity has been the shift towards cloud computing. In the past, security measures often revolved around protecting organisations like fortresses, with layers of defences such as firewalls and intrusion detection systems. However, with the advent of cloud computing, data storage and processing have moved away from personal computers and onto centralised cloud platforms provided by major companies like Amazon, Microsoft, and Google. 
 
This shift parallels the evolution of AI, which is now integrated into these cloud services. 
 
Initially, many companies were hesitant to adopt cloud technology until they understood how to secure Software as a Service (SaaS) offerings effectively. As a result, security vendors have had to rapidly adapt to meet the evolving needs of businesses in this new cloud-centric environment. 
 
Additionally, there’s a growing consideration of the role of human developers in generating secure code, reflecting the broader trend of incorporating security into the development lifecycle.
 
AI’s Impact on the Cybersecurity Sector
Earlier, the focus was on human-generated code, with security teams training human developers to run security tools after writing code to ensure its safety. However, AI is changing this landscape by autonomously generating code, making the concept of “anyone can code” a reality. 
 
AI is becoming integral to various business tools and processes, including fraud detection in banking and AI assistants for credit card companies. As AI permeates different use cases, cybersecurity teams have expanded in size and complexity. 
 
With AI handling tasks like answering emails and writing applications, it’s crucial for security teams to understand and enforce boundaries to ensure that AI deployments don’t compromise organisational security.
 
Data Protection and AI Integration
To ensure robust data protection when integrating AI into operations, organisations should first align AI initiatives with business requirements. This involves understanding where and how AI is being utilised within the organisation to tailor protection efforts accordingly. 
 
The overarching goal of AI protection should be safeguarding corporate data from unauthorised access or leakage. Implementing secure Web gateways helps control and monitor internet traffic, preventing unauthorised access to AI-related resources.
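As a rough illustration of the gateway idea, the sketch below applies a default-deny egress check against approved AI endpoints; the domain lists and the `gateway_allows` function are hypothetical, not any vendor's actual configuration:

```python
# Illustrative default-deny egress check, in the spirit of a secure
# Web gateway policy: only traffic to approved AI endpoints passes.
# The domain lists here are hypothetical examples.
from urllib.parse import urlparse

APPROVED_AI_ENDPOINTS = {"ai.internal.example.com", "api.approved-vendor.example"}
BLOCKED_AI_SOURCES = {"unvetted-model-host.example"}

def gateway_allows(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host in BLOCKED_AI_SOURCES:
        return False  # known-bad AI source: drop (and log) the request
    return host in APPROVED_AI_ENDPOINTS  # default-deny everything else

print(gateway_allows("https://ai.internal.example.com/v1/chat"))  # True
print(gateway_allows("https://unvetted-model-host.example/run"))  # False
```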
 
Encryption plays a crucial role in data protection, ensuring that even if files are exfiltrated, neither attackers nor their AI systems can read them. Employing encryption for the most sensitive files mitigates the risk of data breaches. 
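As one hedged example of such file-level protection, the sketch below uses the Python `cryptography` package's Fernet scheme (authenticated symmetric encryption); the key handling shown is simplified for illustration only:

```python
# Sketch: encrypting a high-sensitivity file at rest so that a leaked
# copy cannot be read or fed into an AI system. Uses the `cryptography`
# package's Fernet scheme; real deployments would keep the key in a
# key-management system, not alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustration only: store keys securely
cipher = Fernet(key)

plaintext = b"classified project notes"
token = cipher.encrypt(plaintext)  # ciphertext is safe to store
restored = cipher.decrypt(token)   # readable only with the key
assert restored == plaintext
```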
 
Additionally, organisations should implement measures to block known malicious AI models or sources, reducing the risk of data exposure.
 
Controlling access to AI resources through strict access controls and approval processes is essential. By regulating access, only authorised personnel can access and utilise AI systems, minimising the risk of unauthorised data access.
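A minimal sketch of such an approval gate might look like the following, with hypothetical resource and role names and a default-deny lookup:

```python
# Minimal sketch of an approval gate for AI resources: access is
# default-deny, and only roles explicitly granted a resource may use
# it. Resource and role names are hypothetical.
AI_ACCESS_GRANTS = {
    "model-finetuning": {"ml-engineer"},
    "ai-search": {"ml-engineer", "analyst"},
}

def may_use(resource: str, role: str) -> bool:
    # Unknown resources and ungranted roles are refused by default.
    return role in AI_ACCESS_GRANTS.get(resource, set())

assert may_use("ai-search", "analyst")
assert not may_use("model-finetuning", "analyst")
```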
 
Data classification policies should be established to categorise data based on its sensitivity level. This allows organisations to prioritise protection efforts and apply appropriate security measures to each data category.
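To illustrate, a classification policy can be expressed as a simple label-to-controls mapping; the labels and handling rules below are assumptions for the sketch, not a prescribed scheme:

```python
# Sketch: a classification policy as a label-to-controls mapping, so
# protection effort follows data sensitivity. Labels and handling
# rules are illustrative assumptions.
HANDLING_RULES = {
    "public":       {"encrypt_at_rest": False, "allow_ai_processing": True},
    "internal":     {"encrypt_at_rest": True,  "allow_ai_processing": True},
    "confidential": {"encrypt_at_rest": True,  "allow_ai_processing": False},
}

def controls_for(label: str) -> dict:
    # Unlabelled data falls back to the most restrictive handling.
    return HANDLING_RULES.get(label, HANDLING_RULES["confidential"])

print(controls_for("internal"))
print(controls_for("unknown"))  # treated as confidential by default
```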
 
Regular employee training sessions are crucial to educate employees on AI-related security risks and best practices. Ensuring that employees understand their roles and responsibilities in safeguarding data when using AI technologies is vital for overall data protection efforts.
 
By following these recommendations, organisations can effectively protect their data while leveraging AI for business innovation and growth.
 
Gartner Cybersecurity Mesh Project
Cybersecurity Mesh (CSM) is a project that Patrick Hevesi, VP Analyst with Gartner for Technical Professionals and current Conference Chair of the Gartner U.S. Security Summit, initiated around five years ago.  
 
The primary focus was on enhancing defence strategies. 
 
Cybersecurity mesh, or cybersecurity mesh architecture (CSMA), is a collaborative ecosystem of tools and controls to secure a modern, distributed enterprise. It builds on a strategy of integrating composable, distributed security tools by centralising the data and control plane to achieve more effective collaboration between tools. 
 
Outcomes include enhanced capabilities for detection, more efficient responses, consistent policy, posture and playbook management, and more adaptive and granular access control — all of which lead to better security.
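One way to picture the centralised data plane is a shared event stream that distributed tools publish into; the sketch below is a toy illustration under that assumption, not Gartner's reference design, and the tool names and event envelope are invented:

```python
# Toy illustration of the shared data plane behind CSMA: distributed
# security tools publish normalised events into one central stream
# where analytics and policy can act on them together.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CentralSignalPlane:
    subscribers: List[Callable[[Dict], None]] = field(default_factory=list)

    def subscribe(self, handler: Callable[[Dict], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, source: str, event: Dict) -> None:
        # Wrap each tool's output in a common envelope before fan-out.
        envelope = {"source": source, **event}
        for handler in self.subscribers:
            handler(envelope)

plane = CentralSignalPlane()
plane.subscribe(lambda e: print("analytics saw:", e))

# Tools feed one plane instead of keeping separate silos.
plane.publish("endpoint-agent", {"type": "login", "user": "a.user", "ok": False})
plane.publish("cloud-monitor", {"type": "api_call", "region": "eu-west"})
```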
 
The concept of CSM is rooted in the utilisation of security analytics and real-time intelligence data, facilitated by technologies like generative AI. This approach involves capturing signals of unusual behaviour, whether it’s anomalies in user activity or abnormal cloud behaviour such as unusual logins. 
 
By aggregating and analysing these behavioural signals in real-time, CSM aims to detect deviations from the norm and proactively assess the risk level.
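As a toy illustration of this baselining idea (not the actual CSM analytics), the sketch below flags a daily login count that strays far from a user's historical norm:

```python
# Toy baselining check in the spirit of the analytics described above:
# flag a daily login count far outside the user's historical norm.
from statistics import mean, stdev

def is_anomalous(history: list, today: int, threshold: float = 3.0) -> bool:
    # Flag values more than `threshold` standard deviations from the mean.
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

daily_logins = [4, 5, 6, 5, 4, 5, 6, 5]  # the user's baseline behaviour
print(is_anomalous(daily_logins, 5))     # False: consistent with history
print(is_anomalous(daily_logins, 40))    # True: deviation worth mapping
                                         # against known attack chains
```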
 
Rather than waiting for an attack to occur, CSM identifies changes in patterns as soon as they deviate from the baseline, allowing security teams to map these deviations to known attack chains and predict potential future attacks. 
 
This predictive model, empowered by AI, enables security professionals to take pre-emptive action and intervene before an attack unfolds. 
 
By identifying patterns that align with various attack chains, security teams can anticipate the next steps of potential adversaries and develop proactive countermeasures accordingly.
In essence, CSM enables security teams to stay one step ahead of cyber threats, leveraging predictive analytics and AI to anticipate and mitigate attacks before they occur. 
 
This proactive approach represents a significant advancement in cybersecurity defence strategies, allowing organisations to effectively combat evolving cyber threats in real-time. The mesh architecture represents a collaborative effort to empower enterprises to stay ahead of cyber threats, rather than solely engaging in reactive firefighting measures.
 
From Scratch to Shield
“Understanding AI is critical,” says Patrick Hevesi, “because, like any technology, it has the potential for both good and bad outcomes. However, as a cybersecurity professional, I thrive on the daily challenges. Years in the field have instilled in me a passion for new technologies, even if it means starting from scratch. This knowledge empowers us to build strong defences, formulate effective strategies, and train organisations”.
 
Credit: Podcast of Patrick Hevesi, VP Analyst with Gartner
 
