Using AI for training as “a tool in the toolbox”


Have you heard this lately: 

“AI can assist with a variety of common tasks and requirements.” 

This is the default line about AI in the workplace, your car, the hospital, your kid’s school, inside your dishwasher, and everywhere else AI is being marketed. But tech experts and industry analysts are short on specifics about what exactly those common tasks and requirements are.

Another refrain is "Use AI as a 'tool in your toolbox.'" How that tool should be used is unclear to most people, and the advice often sounds like the proverbial "When all you have is a hammer…"

In the realm of healthcare, life safety, workplace safety, and emergency response, there are some applications of modern-day "AI" at work right now.

In 2023, a small wildfire in the Cleveland National Forest in California was spotted by a camera deep in the woods trained to look for telltale signs of a wildfire, and the fire was extinguished. This, however, is not new; it is old technology re-branded with an "AI" label. The camera looked for what appeared to be smoke using the same kind of vision technology your home security system uses to distinguish the dog from the postal carrier.

Medical researchers are training AI systems on thousands of CT and MRI scans of various cancers; the resulting models can be applied to new scans to flag abnormalities too small or faint for a human to see, or to reduce false positives. This also isn't "AI" so much as it is intense pattern recognition. Pattern recognition is a building block of AI, just as learning your ABCs is a building block of language skills.
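
To make "pattern recognition" concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the "scans" are synthetic 8x8 grids and the model is a deliberately simple classifier, not anything resembling a real radiology system. It shows only the core idea: learn a statistical pattern from labeled examples, then apply it to new ones.

    # Toy pattern recognition: synthetic 8x8 "scans," where abnormal ones
    # contain a faint bright blob. Hypothetical data, not a medical model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_scan(abnormal):
        scan = rng.normal(0.0, 1.0, size=(8, 8))  # background noise
        if abnormal:
            r, c = rng.integers(0, 7, size=2)
            scan[r:r + 2, c:c + 2] += 3.0         # faint 2x2 "lesion"
        return scan.ravel()

    X = np.array([make_scan(i % 2 == 0) for i in range(1000)])
    y = np.array([i % 2 == 0 for i in range(1000)])

    # Train on 800 labeled examples, then test on the 200 held out.
    model = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
    print("held-out accuracy:", model.score(X[800:], y[800:]))

The model never "understands" anything; it learns which pixel patterns statistically separate the two labels, which is the same basic mechanic behind far larger scan-reading systems.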

In other scenarios, AI is being touted as the solution for detecting harmful gases and environmental conditions in construction, mining, and manufacturing workplaces. These systems use specialized wearable devices such as smart helmets. Color us impressed but skeptical that this is "AI" either. These kinds of devices had a name before: "sensors." Detecting high levels of carbon monoxide, particulate matter, or radiation has been possible for a long time.
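
To the point above, the core logic of such a device is often a plain threshold check. Here is a minimal sketch; the alarm levels are invented for illustration, and real devices follow regulatory exposure limits, not these numbers.

    # Minimal sketch of a gas/particulate monitor's core logic: compare a
    # reading to a threshold and alarm. Alarm levels are invented for
    # illustration, not regulatory limits.
    CO_ALARM_PPM = 50        # hypothetical carbon monoxide alarm level
    PM25_ALARM_UGM3 = 35     # hypothetical fine-particulate alarm level

    def check_readings(co_ppm, pm25_ugm3):
        alarms = []
        if co_ppm >= CO_ALARM_PPM:
            alarms.append(f"CO high: {co_ppm} ppm")
        if pm25_ugm3 >= PM25_ALARM_UGM3:
            alarms.append(f"PM2.5 high: {pm25_ugm3} ug/m3")
        return alarms

    print(check_readings(co_ppm=62.0, pm25_ugm3=12.0))  # ['CO high: 62.0 ppm']

No learning and no model: just a comparison, which is why "sensor" remains the honest name.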

Separating the marketing from what is technologically possible is becoming a challenge. And still, the line is always "AI can assist with a variety of common tasks and requirements." In practice, this increasingly means "ChatGPT can write reports and emails I don't want to."

How LLMs work, briefly

If you’re unfamiliar with AI systems, know two things:

  1. They are "calculators for words" built on "Large Language Models" (LLMs). Rather than calling them that, the companies behind most early LLM systems realized it was more exciting to call them "AI". "AI" has a specific definition: "Artificial intelligence is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making." LLMs do approach some level of decision-making and learning, but lack true reasoning and problem-solving. 
  2. LLMs put together the statistically most likely "next words", hence the notion that they are "calculators for words." Think of them as predictive engines for language: astonishingly good at finding the next right word, but without really understanding what they are saying. For example, if someone asked you to fill in the blank "The cat in the ____", most people will statistically say "hat", but "box", "bag", "house", "window", and hundreds of other possibilities make sense.

Statistically likely word choices are, in the briefest terms, how LLM systems like ChatGPT work. Providing context (or a "prompt") can shift these statistical word choices, as if we said, "Fill in the blank again, but recognize you are standing outside the pet shop." With that context, "window" becomes a more likely next word than "hat" for "The cat in the ___".
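
For readers who want to see that mechanic in miniature, here is a hypothetical sketch. The word probabilities and context boosts are invented for illustration; a real LLM derives them from enormous amounts of training text, but the basic move of picking the statistically likely next word, shifted by context, is the same.

    # Toy "calculator for words": choose the most likely next word for
    # "The cat in the ___", optionally shifted by context. All numbers
    # are invented for illustration.
    BASE = {"hat": 0.55, "box": 0.15, "bag": 0.10, "house": 0.10, "window": 0.10}
    CONTEXT_BOOST = {
        "pet shop": {"window": 6.0},    # standing outside the pet shop
        "moving day": {"box": 6.0},
    }

    def next_word(probs, context=None):
        scores = dict(probs)
        for word, boost in CONTEXT_BOOST.get(context, {}).items():
            scores[word] = scores.get(word, 0.0) * boost
        total = sum(scores.values())    # re-normalize to probabilities
        scores = {w: p / total for w, p in scores.items()}
        return max(scores, key=scores.get)

    print(next_word(BASE))              # -> 'hat'
    print(next_word(BASE, "pet shop"))  # -> 'window'

Real models work over tens of thousands of possible tokens and billions of learned weights rather than a five-word table, but the output is still a statistical choice, not comprehension.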

AI as a life safety training tool, evaluator, and on-the-job assistant

It would be malpractice for us to say AI and LLMs can't or won't ever be able to do something. But short of advanced robotics, it seems unlikely we're anywhere close to a robotic emergency crew that can drive to a call, respond to the emergency, and handle the situation. In case of a fire, someone is still going to have to point some water at it.

But for the other day-to-day work (after-action reports, training materials, scanning spreadsheets for patterns, and administrative items like financial reports, checklists, and task sheets), AI systems are going to have a place.

Even if we reject today's LLMs as inaccurate or unhelpful, a generation of young people is entering the workforce fully engaged with LLMs for everything. LLMs today are like social media in 2005: you can ignore them or belittle them, but they're not going away.

LLMs may be able to summarize long emails, provide much better search indexes of your files, and help generate ideas for presentations or documents. They may advance to better uses. Still, none of us here at VPC believe AI systems, as they exist today, are a helpful training tool. You can't get certified by an AI system, and an AI is unlikely to be working as an on-site evaluator for most workplaces and teams anytime soon, either.

A common LLM use today is test prep. Test preparation for CHEP, CEDP, and other life-safety certifications becomes a matter of asking an LLM for questions similar to those found on the exams and asking it to quiz you. As of this writing, we continue to reject this use case, too, for a simple reason: LLMs must be trained on large amounts of data, and the exams for most life-safety professionals are not public. LLMs have likely never been trained on real CHEP, CEDP, BDLS, CHFSP, or other industry-standard exams and are therefore repeating material from "somewhere else."

In our in-person and virtual training courses, we have routinely said, "You don't just rise to the occasion; you fall to the level of your training." The fundamental difference in using LLMs is the difference between "learning" and "training." You can be "training" by practicing exam questions, but that is not learning. It is, at best, temporary data retention.

Learning happens through repeatable, deliberate practice. When we introduce real scenarios, provide context, and walk trainees through situations that have posed unusual challenges in a training or prep course, that creates an invaluable learning opportunity.

AI systems will no doubt continue to improve, become more niche and refined, and have even better applications in detecting things humans can’t see — like tumors or unusual patterns in vital signs or building operations. But as any veteran of a hospital, fire department, or other large industry can tell you: the things you see and experience on the job are sometimes unimaginable. Training becomes learning when paired with real-world, on-site, and in-context trainers, scenarios, and preparation.
