Are Federal Agencies Ready for an AI Cyber Attack Showdown?

Could hackers use AI to blow up a power plant? According to Stuart Madnick, professor and co-founder of Cybersecurity at MIT Sloan (CAMS), that’s a real possibility. In an interview with CNBC, the cybersecurity expert painted a grim picture, one that government agencies should take as a cautionary tale.

Here is his scenario: cybercriminals use generative AI to craft malicious code and insert it into programmable logic controllers (PLCs), overloading the systems that regulate temperature and pressure in power plants. Rather than causing a temporary shutdown, this strategy could cause explosions or meltdowns that disrupt power for weeks or months until replacement parts are ready.
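
The defensive counterpart to that scenario is an independent sanity check that sits between operator commands and the controller. Below is a minimal sketch in Python, with hypothetical tag names and safety limits, of the kind of out-of-band setpoint validation a monitoring layer could apply; it illustrates the idea, it is not a production safeguard.

```python
# Minimal sketch: an independent sanity check on controller setpoints.
# Tag names and safe bands below are hypothetical, for illustration only.

SAFE_LIMITS = {
    "boiler_temp_c": (0.0, 550.0),          # hypothetical safe operating band
    "steam_pressure_kpa": (0.0, 16_000.0),
}

def validate_setpoint(tag, value):
    """Return True only if the setpoint sits inside its engineered safe band."""
    low, high = SAFE_LIMITS[tag]
    return low <= value <= high

def apply_setpoint(tag, value):
    if not validate_setpoint(tag, value):
        # Out-of-band writes are one signature of injected or tampered logic.
        raise ValueError(f"Blocked out-of-range setpoint {value} for {tag}")
    print(f"Setpoint accepted: {tag} = {value}")

apply_setpoint("boiler_temp_c", 480.0)        # accepted
try:
    apply_setpoint("boiler_temp_c", 2_000.0)  # blocked: far outside the band
except ValueError as err:
    print(err)
```

A real deployment would pair a check like this with hardware interlocks and network monitoring, since malicious code already running on the PLC itself can bypass software validation entirely.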

Is this an extreme scenario? Yes, but security experts are simulating a range of nightmarish AI-powered cyberattacks that could immobilize the operations, impair the capabilities, and compromise the data of state and federal agencies.

This next generation of tactics, techniques, and procedures will surpass anything your cyber defenses have faced. The question is: are you ready for these crafty and persistent attacks? And are you prepared to use artificial intelligence in your organization’s defense?

Cybersecurity Threats to Government Agencies

Government agencies are no strangers to cybercriminals. A ransomware attack shut down servers in the city of Baltimore, crippling government services for weeks. Over 22.1 million federal personnel records, some involving security clearances, were stolen from the Office of Personnel Management. These and other damaging attacks were accomplished without AI.

So, how will artificial intelligence amplify the damage hackers are already doing? By lowering the barrier to entry. In the past, cyberattacks were limited by three factors:

  • Capability, or the technical or persuasive skills of the hacker.
  • Opportunity, or the degree of vulnerability presented in a given moment or system.
  • Motivation, or how badly cybercriminals wanted to steal data or compromise a system.

With AI at their fingertips, mid-tier hackers, and even highly motivated low-tech criminals, can increase the sophistication, frequency, and effectiveness of their cyberattacks: AI-generated social engineering lures, ransomware cloaked from detection by machine learning, automated brute-force attacks, and more.
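
The flip side is that the noisier of these attacks still leave detectable patterns. As a simple illustration (not a hardened control), here is a minimal Python sketch of a sliding-window detector that flags a source address exceeding a failed-login threshold; the window and threshold values are illustrative assumptions.

```python
# Minimal sketch: flag sources with too many failed logins in a short window.
# WINDOW_SECONDS and MAX_FAILURES are illustrative values, not recommendations.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_FAILURES = 10

failures = defaultdict(deque)  # source IP -> timestamps of recent failures

def record_failure(source_ip, timestamp):
    """Record one failed login; return True if the source should be flagged."""
    window = failures[source_ip]
    window.append(timestamp)
    # Discard events that have aged out of the window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES

# Simulate a 12-attempt burst from one address, one second apart.
flagged = False
for i in range(12):
    flagged = record_failure("203.0.113.7", timestamp=1000.0 + i)
print("flagged:", flagged)  # True: the burst exceeds 10 failures in 60 seconds
```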

What’s worse is that no organization will be immune. Cybercriminals were already probing for tactics and attacks that stuck; now they have an automated machine that can test attacks and evolve in real time. Here are just a few examples in the government space:

  • Generative AI tools could be used to research government agencies and employees, improving the success of their spear-phishing tactics.
  • Sensitive data, loaded into ChatGPT by government employees seeking new insights, can be compromised if those systems are breached (a minimal redaction sketch follows this list).
  • Data poisoning could seed skewed or inaccurate data into the training sets behind agency AI models, compromising outputs and warping the reports and policies built on them.
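
One practical mitigation for the second risk is to scrub obvious identifiers before any text leaves agency boundaries. Here is a minimal Python sketch of that idea; the regex patterns are illustrative assumptions, and a real deployment would rely on vetted data-loss-prevention tooling rather than a handful of patterns.

```python
# Minimal sketch: redact obvious identifiers before text reaches an external LLM.
# These patterns are illustrative; real DLP coverage is far broader.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace each match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize the case file for jane.doe@agency.gov, SSN 123-45-6789."
print(redact(prompt))
# Summarize the case file for [EMAIL REDACTED], SSN [SSN REDACTED].
```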

Those are just some of the more predictable possibilities. Experts are increasingly identifying emergent abilities in AI, meaning the ability to complete tasks the programs were not initially designed to do. This can produce, among other things, attack methods no human has anticipated.

How Government Agencies Are Adapting Cybersecurity Strategies to AI

For these and other reasons, government agencies need to fight fire with fire: to beat AI-equipped hackers, they must adopt artificial intelligence themselves. Artificial intelligence can maximize your ability to monitor threat surfaces, analyze emerging malware, and even generate unexpected defenses on the fly.
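
To make “monitoring threat surfaces” concrete, here is a minimal sketch of unsupervised anomaly detection over network-flow features, using scikit-learn’s IsolationForest. The two features and the synthetic baseline are assumptions chosen for illustration; a real pipeline would train on the agency’s own telemetry.

```python
# Minimal sketch: learn "normal" traffic, then flag statistical outliers.
# The two features and the synthetic baseline below are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Baseline behavior: [bytes_sent, requests_per_minute] for routine traffic.
baseline = rng.normal(loc=[5_000, 30], scale=[1_000, 5], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# Score new observations: 1 = looks normal, -1 = outlier worth triage.
new_flows = np.array([
    [5_200, 28],     # within the learned envelope
    [90_000, 400],   # possible exfiltration or automated attack burst
])
print(model.predict(new_flows))  # expected: [ 1 -1 ]
```

The appeal of an unsupervised approach here is that it needs no labeled attack data, which matters against novel, AI-generated tactics, though it trades precision for coverage and still needs human triage.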

Some agencies are already exploring AI cybersecurity options. It’s no surprise, for example, that the Department of Homeland Security is using artificial intelligence to defend high-profile targets. The Cybersecurity and Infrastructure Security Agency (CISA) is currently using the tech to identify, monitor, and report on vulnerabilities in the nation’s infrastructure. Those threats to power plants and pipelines? AI can mitigate much of that risk.

CISA even offers a roadmap that not only outlines its own plan but can help shape your departmental efforts. Here are the five pillars it uses to direct its AI efforts (you can read about them in full on its website):

  • Responsibly use AI to support our mission.
  • Assure AI systems.
  • Protect critical infrastructure from malicious use of AI.
  • Collaborate and communicate on key AI efforts with the interagency, international partners, and the public.
  • Expand AI expertise in our workforce.

Though not all of these principles will translate one-to-one across agencies, they offer food for thought that can help you take your first step into AI cybersecurity. Remember, there’s no need to trailblaze your AI cyber defenses on your own.

Taking the First Step Toward Implementing AI Cybersecurity

When we speak to public and private sector leaders, they have an appetite for implementing AI but worry about the risks. That concern comes from the top down. In a March press call, the White House highlighted the importance of AI that advances the interests of the American public. As part of that ideal, government agencies are expected to follow “three new binding requirements to promote the safe, secure, and responsible use of AI.”

Under these requirements, all government agencies must:

  • Verify AI platforms take precautions to protect the rights and safety of the American people.
  • Offer transparency about when and how their agency uses AI.
  • Designate a chief AI officer with the experience, expertise, and authority to oversee all AI used by that agency.

From a cybersecurity perspective, these mandates offer useful guardrails for government agencies to adopt artificial intelligence while mitigating a significant amount of risk. But guardrails are not a blueprint: their open-ended nature doesn’t tell you how to implement artificial intelligence in an ethical, effective way.

Working with a partner like Dexian IT Solutions can simplify your exploration of AI cybersecurity. Since we’re in the business for good, our approach is always ethics first. We think of the consequences before we act. That way, whether you’re worried about data breaches, compromised systems, or a full meltdown, you’ll have the AI solutions to mitigate your risk.

Looking for guidance on defending your agency with the right cybersecurity solution or the best talent? Contact Dexian.