AI Use Cases are Fundamentally Different

The success of integrating AI heavily depends on choosing the right AI use case. This is a perspective for product-oriented technologists to consider before diving into algorithms, data, and engineering.

AI projects often fail because they were non-starters from the outset: they did not solve the right human problem, or they could not meet the minimum performance bar users expected.

AI is the software equivalent of duct tape: good for many things, but not everything. Great products, by contrast, help people accomplish specific tasks really well, while AI can do many things only moderately well.

This is a case for finding opportunities that are uniquely solved by AI, where even moderate performance is still valuable.

Searching for use cases where AI must perform exceptionally well doesn't work; it ignores AI's inherently probabilistic nature. Searching instead for use cases where moderate AI performance is enough delivers immediate value with lower risk.

Prior work has explored this topic through unremarkable computing and Google's "drunk island" metaphor.

Use cases effectively leveraging moderate AI performance have a history of repeated success. This article describes how end users experience AI, including product examples across 5 categories:

  1. Sensor Fusion
  2. Generative AI
  3. Natural Language Processing
  4. Computer Vision
  5. Autonomous Robots

Sensor Fusion

Combining data from multiple sensors creates more accurate & complete representations. This form of AI is most common in products that pair hardware with software.

One simple example is the step counter. Step counters are well loved, motivating people to exercise daily. But AI doesn't need to work exceptionally well, just reasonably well to be useful.

For consumers, a step counter with 90% accuracy (it miscounts 1 in 10 steps) is generally useful for monitoring everyday fitness. Intentional design can also complement moderate AI performance: the exact step count is designed to matter less to the user than reaching a benchmark goal, such as 10,000 steps or closing their rings.
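To make the accuracy point concrete, here is a minimal sketch of naive step detection: counting peaks in accelerometer magnitude with a threshold-and-gap rule. The function, threshold, and sample format are illustrative assumptions for this article, not how any commercial tracker actually works.

```python
import math

def count_steps(accel_samples, threshold=11.0, min_gap=10):
    """Naive step counter: counts peaks in accelerometer magnitude.

    accel_samples: list of (x, y, z) readings in m/s^2.
    threshold: magnitude above which a sample may mark a step
               (illustrative value; a real tracker tunes this per device & user).
    min_gap: minimum number of samples between two counted steps,
             to avoid double-counting a single stride.
    """
    steps, last_step = 0, -min_gap
    for i, (x, y, z) in enumerate(accel_samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps
```

A crude rule like this will miscount some steps, yet for a user whose goal is simply reaching 10,000 steps it is already useful; fusing additional signals (gyroscope, GPS, heart rate) is what pushes accuracy higher.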

For enterprise use cases, such as specialized clinical research, step counters may require higher accuracy, in the range of 95–99%.

In either case, AI is uniquely valuable because even a moderately accurate counter beats the alternatives: (1) a person counting their own steps, or (2) nothing at all. Similar situations where moderate AI performance is useful include facial recognition for tagging photos, live translation, shopping recommendations, voicemail transcription, and battery life optimization.

All are examples of narrow AI: often unnoticed, but appearing frequently in everyday life. Narrow AI is designed for a specific task (step counting) within a defined context (physical activity tracking). It is commonly deployed by large companies because post-deployment risk is low; the guardrails built into the design constrain how the capability can be used.


Generative AI

The nature of generative AI is that it will not return the same result twice. This works well in use cases where mistakes & some unpredictability are generally accepted: art, music, writing, film & entertainment, gaming, etc.

Products like Krea.ai, Superside, and Microsoft Cocreator position AI as a creative collaborator, turning sketches into professional compositions. The AI just needs to work well enough & produce reasonably good visuals.

Novices, hobbyists, professionals

Generative AI mostly helps the novice, quickly bringing their ideas to life; novice users also trust AI to do a relatively good job. Professionals, however, come with a unique vision that AI will struggle to reproduce. And hobbyists (artists, writers, musicians, etc.) value the happy accidents and emotional highs and lows of the creative process, a human experience that reliance on AI can erode.

Multi-purpose AI & ethics

Generative AI is multi-purpose. But unlike narrow AI, when people can tinker with it any way they like, it can rarely be designed so safely that it harms no one.

"AI is not some natural phenomenon that will just emerge and become dangerous. We design it and we build it." (Yann LeCun)

Multi-purpose AI is useful across many applications, but also more open to unintended consequences. Concerns exist across industries, with many examples of models regurgitating content or creating new content by remixing prior examples.

But applied in the right situations, even moderately performing generative AI is extremely useful: (1) when some mistakes are acceptable, (2) when randomness enhances the user experience, (3) when you want to get from 0 to 1, not from 1 to 10, and (4) when quick variations of prior work are more useful than crafting original content.


Natural Language Processing

This is the most widely used form of AI today. Across consumer and enterprise use cases, language is ubiquitous, and text is the most accessible data from digital interactions.

Use cases that are hard for NLP to achieve generally require domain expertise and nuanced language understanding: detecting sarcasm, understanding complex narratives, adapting to new situations that require expert decision making from human experience.

Use cases easier for NLP to achieve can be viewed from two lenses: (1) tasks for an intern, not a domain expert, (2) situations where AI can take a first pass to offload most work, followed by humans completing the remaining work.

  1. Tasks for an intern, not a domain expert. Notion finds relevant keywords from your entire workspace, Otter.ai converts spoken language to text transcriptions, SummarizeBot condenses documents into summaries, Paperpile helps researchers find and cite papers, iOS can translate text in apps and transcribe voice messages. All are workflow-enhancing tasks that don't require domain expertise.
  2. AI can take a first pass, followed by human validation. A spam filter is a simple two-class classifier, labeling emails as spam or not spam. News articles can also be labeled by category: sports, technology, entertainment, politics, other… These are simple tasks that are time-consuming & error prone for a person, but relatively quick and simple for AI.
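To illustrate how simple the first-pass classifier can be, here is a minimal sketch of a two-class spam filter using scikit-learn's TF-IDF features and naive Bayes. The toy emails and labels are made up for the example; a real system would train on a large labeled corpus and route uncertain cases to a person.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data; a real filter would learn from thousands of labeled emails.
emails = [
    "Win a free cruise, claim your prize now",
    "Meeting moved to 3pm, agenda attached",
    "Limited offer: cheap pills, click here",
    "Quarterly report draft attached for your review",
]
labels = ["spam", "not spam", "spam", "not spam"]

# TF-IDF features + naive Bayes: a classic baseline for two-class text tasks.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# The AI takes the first pass; borderline emails can be flagged for human review.
print(model.predict(["Claim your free prize today"]))
```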

Both lenses describe situations where AI can effectively manage large volumes of routine tasks prone to human error, allowing people to focus on specialized & nuanced work.


Computer Vision

Most of the world's data is visual & spatial. Very little of it is, or can be, represented through text, even though language is the most common AI modality today.

It's extremely hard to model the world. Fully autonomous driving has been continuously promised, but remains hard to achieve and is still not broadly available commercially:

A teenager who has never sat behind a steering wheel can learn to drive in about 20 hours, while the best autonomous driving systems today need millions or billions of pieces of labeled training data and millions of reinforcement learning trials in virtual environments. And even then, they fall short of humans' ability to drive a car reliably. (Yann LeCun, Meta Research)

Instead of chasing use cases that are extremely hard for AI to achieve with high accuracy, like autonomous driving, high-stakes decision making, or AI agents that could replace a human, there are many simple AI opportunities that are broadly useful.

Many complex AI use cases start with a simple detection, which can be broadly useful by itself.

For example, autonomous parking is hard to build and serves only a small group of drivers afraid to park in the city. But it starts with a simple prediction any driver may find useful: is this space big enough?
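Stripped down, that prediction is just a comparison between a detected gap and the car's footprint. The function below is a hypothetical sketch; the perception step that estimates the gap from ultrasonic or camera data is assumed and not shown, and the default dimensions are illustrative.

```python
def is_space_big_enough(gap_length_m: float,
                        car_length_m: float = 4.5,
                        margin_m: float = 1.2) -> bool:
    """Return True if an estimated curbside gap can fit the car.

    gap_length_m: free space estimated by upstream perception (assumed here).
    car_length_m, margin_m: illustrative defaults; a real feature would use
    the vehicle's actual dimensions and required maneuvering margin.
    """
    return gap_length_m >= car_length_m + margin_m

print(is_space_big_enough(5.2))  # False: 5.2 m gap < 4.5 m car + 1.2 m margin
print(is_space_big_enough(6.0))  # True
```

The hard part is the detection upstream, not this check, which is exactly why the simple "is this space big enough?" feature can ship long before full autonomous parking.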

It's hard to build automated AI maintenance systems. But a useful AI feature can simply monitor equipment usage and track breakdowns through visual inspection. Users can build trust in AI, starting from simple feedback loops, gradually moving toward complexity as data allows.

AI headlines tend to promise aspirational use cases, which have delivered mixed results both historically and recently. But incremental AI solutions can still be delightful, complete products.


Autonomous Robots

The human-machine partnership has been extensively studied in the field of robotics. Its concepts are now trickling into digital product experiences, branded as co-pilots, agentic experiences, AI partners, and AI collaborators.

Levels of Autonomy

Consider the six levels of autonomy defined in robotics, from no autonomy (level 0, a spam filter) to fully autonomous (level 5, a personal assistant capable of independently managing different tasks).

Most AI systems today sit between these extremes, at levels 1–4: semi-autonomous AI with human feedback. These systems are difficult to design because they involve two mutually adaptive agents: the AI system, whose accuracy varies with the conditions of use, and the human user, who comes with variable task expertise & mental models of AI.
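As a rough sketch of that spectrum, the enumeration below labels six levels from 0 to 5. The short descriptions are loosely adapted from the SAE-style driving-automation scale and are an illustrative paraphrase, not definitions taken from a robotics standard.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative 0-5 autonomy scale; descriptions are paraphrased, not a formal standard."""
    NONE = 0         # no autonomy: fixed behavior, the human owns the task (e.g., a spam filter)
    ASSISTANCE = 1   # AI assists with one sub-task, the human does everything else
    PARTIAL = 2      # AI handles parts of the task while the human monitors continuously
    CONDITIONAL = 3  # AI handles the task in defined conditions, asks the human to intervene
    HIGH = 4         # AI handles the task in most conditions, the human rarely intervenes
    FULL = 5         # fully autonomous: AI manages the task end to end

# Most of the products discussed here sit in the semi-autonomous middle of the scale.
semi_autonomous = [level.name for level in AutonomyLevel
                   if AutonomyLevel.NONE < level < AutonomyLevel.FULL]
print(semi_autonomous)  # ['ASSISTANCE', 'PARTIAL', 'CONDITIONAL', 'HIGH']
```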

AI System Performance vs. Design

With both non-autonomous and fully autonomous systems, the user's mental model stays relatively stable: they expect the AI to work reasonably well & independently, or not at all. Fully autonomous systems rely entirely on AI system performance, notifying users only when needed.

However, the majority of use cases today involve semi-autonomous AI. These systems rely mostly on design for (1) moderating user expectations based on system performance, and (2) allowing different workflows depending on user trust in the system. For example, users may trust the AI to work independently with no oversight, want enough transparency into the system to know whether & when to intervene, or decide to complete the task without AI at all.

One successful example is Roomba, the robotic vacuum cleaner. People can let Roomba roam free (fully autonomous), intervene when it gets stuck (semi-autonomous), or pause the Roomba's schedule and instead clean spaces manually (no autonomy).

Even with moderate AI performance, such as missing some spots or getting stuck, Roomba still offers tremendous convenience. It's a uniquely simple human-machine partnership where the robot's intention is obvious, because its movements are visible.

As system performance improves, and as people use the product more for its practical value, users' trust in the system also increases.


AI use cases are fundamentally different

AI performance is improving quickly. But unlike traditional software, AI results are inherently nondeterministic.

The challenge with AI projects is that the solution is already implied. Successful projects gain traction when they effectively sell a problem first. But an AI project, instead of starting from a valuable user problem, presumes that the solution is AI.

This calls for a different way to search for use cases: matchmaking human needs with probabilistic systems.

Use cases that require AI to perform exceptionally well tend to fall short of expectations. They set a high bar that AI cannot reasonably achieve in every scenario for every user.

Instead, finding use cases where moderate AI performance is still valuable starts at a baseline where users already gain value.

As accuracy improves and hallucinations decrease, the user experience naturally improves too.


See the prelude to this article:

Why Do AI Projects Fail?


Thanks for reading! I'd love to know your thoughts.

Elaine writes about design, AI, emergent tech. Follow for more, or connect on LinkedIn.
