AI: Preparing Your Company

No one wants to be left behind in the race to implement AI solutions. Here are some key steps to get your house in order so you can adopt AI responsibly.

Matthew D Edwards
November 14, 2023

This is an exciting time to lead change in business and technology. If you're interested in exploring and adopting one or more aspects of artificial intelligence in your organization, there are several preliminary organizational steps we recommend taking to enable purposeful, secure, and reliable outcomes.

1. Know what problem you want to solve

Take time to discover, define, and refine what problem you want to solve and what it looks like (characteristics, attributes) when solved. In other words, know why you want to spend money and, in advance, set an objective metric to know when you are done spending money.

2. Inventory your house

Find out what data you have, where it is stored, in what form, who owns it, who uses it, which systems use it, how, when, and under what circumstances.

Do the same for every software system your company uses, whether COTS, open source, SaaS, or otherwise: where is it, who owns it, how is it used, when, by whom, and under what circumstances? In particular, map which systems integrate with which others so you understand permissions, security, and data flows.
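One lightweight way to start this inventory is a structured catalog you can actually query. Below is a minimal sketch in Python; the fields, asset names, and entries are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One entry in a data/system inventory (fields are illustrative)."""
    name: str
    location: str          # where it is stored
    format: str            # in what form (CSV, Postgres, SaaS export, ...)
    owner: str             # who owns it
    used_by: list = field(default_factory=list)        # people and systems that use it
    integrations: list = field(default_factory=list)   # systems it exchanges data with

inventory = [
    DataAsset("customer_orders", "warehouse/orders", "Postgres", "Sales Ops",
              used_by=["billing", "analytics"], integrations=["crm"]),
    DataAsset("support_tickets", "saas/helpdesk", "SaaS (vendor-hosted)", "Support",
              used_by=["support team"], integrations=["crm", "slack"]),
]

# A question the inventory can immediately answer: which assets touch the CRM?
crm_linked = [a.name for a in inventory if "crm" in a.integrations]
print(crm_linked)  # -> ['customer_orders', 'support_tickets']
```

Even a simple catalog like this makes the later questions (permissions, security, data movement) answerable rather than anecdotal.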

IP Agreements

Allow this to lead you to a deeper understanding of your intellectual property position. If you built the software, you most likely own the intellectual property; if you do not own the software, however, do you own your data? Do you have a way to get it out for other purposes? You'll need to review the intellectual property and usage agreements between your company and your vendors.

3. Evaluate (or build) your information security posture

Do you have security, governance, and operational controls in place to proactively manage, monitor, and protect your intellectual property?

Realize that your folks will want to test their AI implementations with your company’s data. Before you open that door, your data will likely need to be anonymized, stripped of personally identifiable information (PII), and encrypted at rest and in transit. That takes a bit of work right from the start. Your company must also align with external compliance standards such as GDPR, CCPA, HIPAA, CMMC, NIST, or SOC 2, which takes a bit more preparatory work as well.
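As a hedged illustration of one such preparatory step, the sketch below masks obvious PII patterns in free text before it reaches an AI tool. Real anonymization requires dedicated tooling and human review; regexes like these are only a starting point, and the patterns shown are deliberately simplistic:

```python
import re

# Illustrative patterns only; production PII detection needs dedicated tooling.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace recognizable PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 515-555-0199, SSN 123-45-6789."
print(scrub(record))
# -> Contact Jane at [EMAIL] or [PHONE], SSN [SSN].
```

The typed placeholders matter: downstream teams can still see what kind of data was removed, which helps when debugging model behavior without re-exposing the original values.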

You’ll need to consider things like access and identity management, event monitoring and alerting, and incident remediation, to name a few items that will make any CISO smile.

4. Legal and ethical considerations

Take the time to evaluate whether the AI solutions you desire align with the industries, regions, and countries in which you work. Different countries have different laws.

For example, data-management law and practice in the United States, Canada, and the European Union differ enough that a mistake will cost you money, time, and reputation. The stakes will likely be significantly higher for AI adoption.

As well, consider the ethical implications of the AI, the dataset, the decisions, and the follow-on actions in areas of sensitivity such as healthcare, finance, law enforcement, and geopolitics. Is this the right place for you to be considering AI solutions?

As you might expect, this is an important moment to consider whether, how, and where bias may exist in your company’s data acquisition, management, and usage practices. You will need to understand what bias exists, if any, why and where in the process it arises, and how you may need to manage for it going forward.

As a company that is serious about leveraging artificial intelligence, you’ll want to constantly seek out tools and methods to limit vulnerabilities that AI might be exposing you to along the way.

5. Evaluate (or build) your security-first software development practices

Security-first development is a cultural choice. If your company doesn't follow a predictable, repeatable, and auditable method of building, testing, and delivering software, now is the time to adopt one.

Security practices work best when built directly into the lifecycle from inception to production implementation (general availability) and on-going support and maintenance. One of the most powerful methods of implementing predictable, repeatable, and auditable security (and quality) practices is through automation. Manual practices have given us varied results through the decades. Today we find automation works even better.
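As one small example of what automated, auditable security practice can look like, the hedged sketch below scans a source tree for obvious credential patterns so a build can fail before a secret ships. The patterns and file layout are illustrative; real scanners cover far more cases:

```python
import re
import tempfile
from pathlib import Path

# Illustrative patterns only; real scanners cover many more credential shapes.
SUSPICIOUS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                     # AWS-style key id
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(root: Path) -> list:
    """Return (filename, line number) for every line matching a pattern."""
    hits = []
    for path in sorted(root.rglob("*.py")):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SUSPICIOUS):
                hits.append((path.name, lineno))
    return hits

# Demo against a throwaway directory rather than a real repository.
repo = Path(tempfile.mkdtemp())
(repo / "config.py").write_text('api_key = "abc123"\ntimeout = 30\n')
print(find_secrets(repo))  # -> [('config.py', 1)]
```

Wired into a CI pipeline as a merge gate, a check like this runs identically every time, which is exactly the predictability and auditability manual review struggles to provide.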

6. Evaluate your lifecycle test strategy

In our opinion, test-driven development (TDD) is a non-negotiable best practice. It helps teams build the right software simply, and it lets downstream people see immediately when new changes break existing working systems, during development rather than in production when it counts the most.
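The TDD rhythm is: write a failing test that states the requirement, then write just enough code to make it pass. A minimal illustration using Python's built-in unittest and a made-up discount rule:

```python
import unittest

# Step 1: the tests state the requirement before the code exists.
class TestDiscount(unittest.TestCase):
    def test_orders_over_100_get_ten_percent_off(self):
        self.assertAlmostEqual(discounted_total(150.0), 135.0)

    def test_small_orders_pay_full_price(self):
        self.assertEqual(discounted_total(40.0), 40.0)

    def test_negative_totals_are_rejected(self):
        with self.assertRaises(ValueError):
            discounted_total(-5.0)

# Step 2: just enough implementation to make the tests pass.
def discounted_total(total: float) -> float:
    if total < 0:
        raise ValueError("total cannot be negative")
    return total * 0.9 if total > 100 else total

# Run the suite explicitly so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests pass:", result.wasSuccessful())
```

The payoff comes later: when someone changes the discount rule, these tests fail at their desk, not in production.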

While we consider TDD a juggernaut addition to any software development ecosystem, there are many additional types of checks that must occur, including static and dynamic inspection, vulnerability assessment, penetration testing, user threading, and more. No matter where your organization is on the journey of preventing and detecting issues, AI development will require an additional mindset: adversarial testing.

One or more people in your company need to think through how the system will be used and, in particular, how it might be exploited, whether through human interaction, system integrations, data exchanges, system updates, or other channels.

What are the ways to interact with this system? For example: batch files, APIs, custom system-to-system integrations, internal administrators and security folks, and different types of external customers. How could those means be exploited, by whom, and what would the potential impact be?

We all know the old phrase, “Garbage in, garbage out.” Garbage data entering your system doesn’t just degrade results; it also increases the probability of exploitation.
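Adversarial testing in practice means feeding each interface the inputs an attacker would. The sketch below shows a hypothetical batch-record validator exercised with hostile cases; the field names, limits, and rules are invented for illustration:

```python
# A hypothetical batch-record validator, probed with the kinds of hostile
# inputs an adversarial tester would try before an attacker does.

MAX_NAME_LEN = 64

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record is acceptable."""
    problems = []
    name = record.get("name")
    if not isinstance(name, str) or not name.strip():
        problems.append("name missing or not text")
    elif len(name) > MAX_NAME_LEN:
        problems.append("name too long (possible overflow probe)")
    elif any(ch in name for ch in ";'\"<>"):
        problems.append("name contains injection-prone characters")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or isinstance(amount, bool):
        problems.append("amount missing or not numeric")
    elif amount < 0:
        problems.append("negative amount")
    return problems

adversarial_cases = [
    {"name": "Robert'); DROP TABLE orders;--", "amount": 10},  # injection attempt
    {"name": "A" * 10_000, "amount": 10},                      # oversized field
    {"name": "ok", "amount": -999},                            # hostile value
    {"amount": 10},                                            # missing field
]

for case in adversarial_cases:
    assert validate_record(case), f"hostile input slipped through: {case}"
print("all adversarial cases rejected")
```

The point is the test cases, not the validator: an adversarial mindset enumerates the hostile inputs first and then checks that every interface rejects them.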

7. Third-party vendors and partners

If you are planning to use AI, and perhaps to pool much of your data into one place for an LLM to leverage, alongside new corporate security, software development, testing, and data plans, you will need to evaluate whether your existing vendors, partners, and associated agreements must be revised to match your new AI-motivated business policies. What worked yesterday, pre-AI, may not work tomorrow in your new AI world.

  • Do they meet your security standards?
  • How often will you reassess?
  • How will you change your third-party or vendor vetting and adoption process?
  • What is your process for separating from existing third-party vendors and partners?

8. Employee training and corporate messaging

This is one of those times when people need to understand what is happening, what it means, how to use it well, and what is unacceptable. Provide opportunities for introductory through advanced training, whether internal or provided by a vendor or partner. There are many learning styles. Determining who needs to know what and when will greatly influence your corporate training plan.

Folks in your company will understand the potential value if all works out as planned. Emphasize, equally, the cost to the company in the event tools are used unprofessionally, unethically, or irresponsibly.

You may also need to consider a formal kick-off event for this program, a plan to keep folks up to date on the status, training opportunities, risks, and results. In some companies, this looks like a corporate training and communication plan. You’ll have to decide what makes sense for you and your teammates.

Go. Explore. Test. Learn. Change.

We recommend you prepare your enterprise in advance for the changes that AI may bring to your company, culture, and liability.

Read the Rest of the Series