This article is designed to guide you systematically as you explore, test, and learn about AI technologies in your company: what to consider, when to consider it, and what we recommend you weigh when making a go/no-go decision. It is a testing-and-learning framework built around the conversation of AI, but reusable in many other decision-making scenarios across your organization.
Whether you’re interested in machine learning, natural language processing, matching algorithms, AI-powered search functionality, or building a centralized database for use by these technologies, you need a plan.
1. Know what problem (scenario) you want to solve or address
2. Know what possible solutions exist to help you solve it
3. Purposefully and methodically test the ideas in a controlled environment
4. Know what you want to know and when you’ve learned it
5. Know what criteria will motivate you to go to production or kill the project
Deciding how to test and how to implement AI technology isn't quite as simple as a five-step process. However, this list previews the remainder of this material, which is designed to guide your conversations, actions, and decisions as you figure out what you want to do next.
Let’s get into it.
Having a clear objective (business problem) will focus your teams' efforts, the associated time and financial spend, and the results in a direction that is more likely to tell you whether you should stop spending money, change direction, or go forward.
Not having a clear objective will be fun for the people spending your money, but far less useful to the overall business. In particular, knowing what problem you'd like to solve will focus which technologies you need to test, and more importantly, which ones you do not. For example, do you want to expose:
a. Salary, retirement, and health policy status for your employees to answer their own questions about the state of their compensation package with your company?
b. Request for Information (RFI) and/or Request for Proposal (RFP) submissions for your sales folks to become more efficient at winning sales opportunities?
c. Policies, procedures, guidelines, FAQs, and device owner manuals for your customer care representatives to more quickly solve incoming phone calls?
d. Sales data regarding go-to-market strategy, target client profiles, past and present prospects, signed and unsigned contracts, financial structures, and terms for your executive team to curate the company’s go-to-market strategy?
e. The source code repositories across your company, so that developers can better understand what work can be referenced, reused, or must be created new?
Interested in understanding these scenarios to build out yours? Read my previous article, AI: Example Business Use Cases.
Once you know what data you want to use, decide how you will source it for testing:
a. Will you take full, raw, production copies?
b. Will you take anonymized, subsets or slices?
c. Will you ingest more than one data source and normalize them first?
Plot reveal: We recommend working in an isolated, non-production test environment using an anonymized subset of your larger data pool, in order to start small and avoid exposing PII or otherwise violating any cybersecurity or regulatory compliance principles during the initial proof of concept. Eventually you will need to test with a valid copy of the target data; until you verify the functionality works as you desire, we recommend purposefully containing your exposure by acting aggressively in a security-first posture.
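To make the recommendation concrete, here is a minimal sketch in Python of pulling an anonymized subset of a larger record set before loading it into a test environment. The field names, sample data, and the hashing scheme are illustrative assumptions, not a prescription; your compliance team may require tokenization or full redaction instead.

```python
import hashlib
import random

def anonymize_subset(records, sample_size, pii_fields, seed=0):
    """Take a random subset of records and replace PII fields with
    one-way hashes so real identities never enter the test environment."""
    rng = random.Random(seed)  # deterministic, so test runs are repeatable
    subset = rng.sample(records, min(sample_size, len(records)))
    anonymized = []
    for record in subset:
        clean = dict(record)
        for field in pii_fields:
            if field in clean:
                # Hashing is illustrative only; substitute whatever
                # de-identification method your compliance team approves.
                clean[field] = hashlib.sha256(
                    str(clean[field]).encode()
                ).hexdigest()[:12]
        anonymized.append(clean)
    return anonymized

# Hypothetical records standing in for a production HRIS export.
employees = [
    {"name": "Ada", "email": "ada@example.com", "plan": "401k"},
    {"name": "Sam", "email": "sam@example.com", "plan": "HSA"},
    {"name": "Kim", "email": "kim@example.com", "plan": "401k"},
]
test_data = anonymize_subset(employees, 2, ["name", "email"])
```

The subset lands in the test environment with the non-sensitive fields (here, the benefits plan) intact and the identifying fields replaced, which keeps the proof of concept realistic without the exposure.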
Also decide which existing systems, if any, the test environment connects to. For example:
a. Do we connect to our Sales team’s CRM to enable folks answering RFPs to discover reuse opportunities while they are filling out the documents?
b. Do we connect to our People Operations HRIS to enable people to query and then add, remove, or change aspects of their relationship to the company?
c. Do we connect to our corporate IAM (identity and access management) tool to manage security consistent with corporate expectations?
d. Do we connect to our corporate source code repositories to allow developers to query for reuse while they write software?
Setting up such a proof-of-concept or test environment might be a normal walk-in-the-park day for your IT folks, or it might be a brand-new experience for them. Either way, your engineering team, coupled with someone with an information security, regulatory compliance, and/or legal background, will be invaluable in helping you discover, define, and manage the boundaries of this effort.
For example, given that the entire AI endeavor is about making data useful in an interactive manner, engineering best practices coupled with adherence to domestic and international information security standards will get you heading in the right direction.
Your testing and training activities should be treated as if they are real, so that when things become real, you already know what to expect and how to behave.
If setting up test environments is something new, learn about our security-by-design approach.
As for who should do the testing: the answer may be influenced by what data is being exposed, who the project sponsors and stakeholders are, and what problem you are solving.
For example, if you are working to expose data in your HRIS system so that employees can self-solve when it comes to benefits, the best first testers may be people from the People Operations group. People Operations team members represent the company’s interest and are themselves individual employees at the same time.
If you are exposing sales data in new ways for new uses, include some of your top sales people to see if and how this information fits into existing workflows, or reveals the need for new or modified workflows. Have them give you perspective on whether the data they are retrieving is actually useful, and ask them to tell you if they believe it will help them be more successful in their jobs.
Recommendation: Given that implementing this functionality may change culture and behaviors, use the testing activity as an opportunity to build excitement, support, and messaging inside your company. People who are involved often take ownership and become power users and evangelists.
Recommendation: Purposefully construct a test plan so that you have expansive coverage of what you want, what you don’t want, when, by whom, and how.
Absent a purposeful, executed, objective test plan, you may end up with well-intentioned, excited people making emotional business decisions for your company that you later regret.
Your test plan should answer questions such as:
a. What roles / personas / people types will use this system?
b. What activities will they perform?
c. What activities will they not be allowed to perform?
d. What must work without fail or the system is useless?
e. Is the data being returned correct, complete, and useful?
f. How does a user provide feedback to improve the data, system, and experience?
g. What should they see, when should they see it, and how should it look?
h. Are people able to download or otherwise take a copy of the results?
i. What should a person never be able to see and what is the rejection message?
j. How are any and all events in the system logged for later reference (monitoring, auditing, debugging)? What is logged? What is not logged? Who has access to the logs?
k. Is intellectual property (IP) managed according to corporate legal expectations and recommendations?
l. Is access and identity management, monitoring, and alerting working according to corporate information security expectations and recommendations?
And so on. Know what you want to know, know what you want seen, and know what you do not want accessed and/or seen.
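Several of these questions translate directly into an executable test plan. Below is a hedged sketch in Python of how two of them (must-work queries returning correct data, and forbidden data being rejected with a clear message) might be encoded as assertions. The query interface, field names, and rejection message are hypothetical stand-ins for whatever your proof-of-concept system actually exposes.

```python
# Hypothetical interface standing in for your proof-of-concept system;
# replace with the real client your engineers expose.
FORBIDDEN_FIELDS = {"ssn", "salary_history"}
REJECTION_MESSAGE = "You are not authorized to view this information."

def query(role, field, data):
    """Return the field's value, or the rejection message when the
    role must never see it (per the test plan's 'never see' list)."""
    if field in FORBIDDEN_FIELDS and role != "hr_admin":
        return {"ok": False, "message": REJECTION_MESSAGE}
    return {"ok": True, "value": data.get(field)}

record = {"plan": "401k", "ssn": "000-00-0000"}

# d. What must work without fail or the system is useless?
assert query("employee", "plan", record)["value"] == "401k"

# i. What should a person never be able to see, and what is the
#    rejection message?
denied = query("employee", "ssn", record)
assert denied["ok"] is False
assert denied["message"] == REJECTION_MESSAGE
```

Encoding the plan this way keeps the decision objective: either the assertions pass in the test environment or they do not, independent of anyone's enthusiasm.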
Finally, define your exit criteria in advance:
a. How do you know when you’re done testing? (Translated: When are you done spending money?)
b. Is it when someone finishes running the tests they planned? Is it when someone you trust says you are good to go? Is it when a proof of concept committee unanimously agrees? Is it when you know what you wanted to know? Is it a feeling?
Know, in advance of the effort, when it is time to stop spending money and when it is time to kill this project or schedule it for general availability.
When decision time arrives, your go/no-go considerations might include:
a. Do you understand what you are buying?
b. What does it look like when all is well?
c. What does it look like when something goes wrong?
d. Are the information security, regulatory compliance, and intellectual property risks addressed to your liking and to that of your trusted advisors, stakeholders, and buyers?
e. How will you know if the investment is/is not paying off?
f. Who will know first if there is a problem? Your engineering support staff or the end-user?
g. How will the system be monitored, who will be alerted, and how will problems be remediated?
h. What will it cost to buy version one of this system? What will it cost to maintain it?
i. How might this system evolve over the next year? Three years? And what does it mean to your business, your employees, customers, and your reputation?
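Questions f through h above can be grounded early with even a simple threshold check, so that your engineering staff, not the end-user, hears about problems first. The sketch below is a minimal Python illustration; the metric names and limits are assumptions to be replaced by thresholds your engineering and stakeholder teams agree on during the proof of concept.

```python
def should_alert(metrics, thresholds):
    """Compare live system metrics against agreed thresholds and
    return the list of breaches, so engineering can be alerted
    before the end-user notices a problem."""
    breaches = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            breaches.append(f"{name}={value} exceeds limit {limit}")
    return breaches

# Illustrative numbers only; set real limits with your teams.
thresholds = {"error_rate_pct": 1.0, "p95_latency_ms": 800}
healthy = should_alert({"error_rate_pct": 0.2, "p95_latency_ms": 450}, thresholds)
degraded = should_alert({"error_rate_pct": 3.5, "p95_latency_ms": 450}, thresholds)
```

The same threshold list doubles as a written answer to "what does it look like when something goes wrong?", which makes the go/no-go conversation concrete rather than anecdotal.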
You will come up with your own go/no-go considerations. Take the time to define them before someone exceptionally persuasive shows up, so that you are not swayed by excitement instead of ROI.
Know what you want to know. Know when you know it. Decide if, when, and how you adopt this new technology into your business. Like anything else, buying it doesn’t make you better; having a plan, being purposeful, and being willing to change does.
Good luck! And if you need help, we can help.