Science

Stop guessing who can work with AI

Introducing Maki AI proficiency assessment

April 3, 2026

1 min

Juliette Santelmo

You can't hire for AI capability if you can't measure it

Every hiring team is under pressure to find people who can actually work with AI. Most are failing at it in silence.

Not because they aren't trying. Because the tools don't exist. A candidate writes "proficient with AI tools" on their resume. You ask a question in the interview. They answer fluently, confidently, and in enough detail to sound credible. You move them forward.

Then they join. And you find out that sounding fluent about AI is not the same as being able to use it effectively under real conditions.

This is the dominant pattern in AI hiring right now. Self-reported confidence is the primary signal, and it is the weakest one available. There is no standard, no structured measurement, and no consistency across hiring decisions. Companies are making consequential choices about who can work with AI based on guesswork and gut feel.

That gap is now closed.

Introducing AI proficiency assessment in Maki

Maki can now test for AI proficiency. It puts candidates in realistic work scenarios and measures what they can actually do with AI, not what they claim.

The assessment is built on a validated four-dimensional framework, developed by Maki's science team using established psychometric processes. This is not a tool-name quiz or a knowledge checklist. It is a structured, evidence-based method for measuring the capabilities that actually matter when someone uses AI as part of their daily work.

It is available today for existing Shiro and Mochi customers, with no additional cost and no changes to your current workflows.

The four dimensions of AI proficiency at work

Most attempts to define "AI proficiency" land on a list of platforms. Have you used ChatGPT? Copilot? Gemini? That is tool familiarity, not capability, and it tells you almost nothing about performance.

Maki's framework identifies four dimensions of what it actually takes to work effectively with AI in a knowledge work context.

AI tool agility is how quickly and effectively someone can learn, adapt to, and troubleshoot AI tools. The AI landscape changes fast. The people who perform well are not the ones who memorized a specific tool. They are the ones who can pick up a new one, figure out what it can and cannot do, and get useful outputs from it quickly.

Human-tool interactivity is the quality of someone's collaboration with AI. Can they prompt well? Can they evaluate outputs critically and refine them? Can they integrate AI into a workflow in a way that actually improves the output, rather than just adding a step? This is where a significant portion of real-world AI performance lives, and it is almost entirely invisible in a traditional interview.

Ethical use of AI covers awareness of risk, fairness, and responsible use, including the judgment to know when not to use AI at all. As AI-assisted decisions become more common across functions, this dimension is no longer optional. Candidates who lack it are a liability.

Data and algorithmic literacy is a baseline understanding of how AI systems work, what affects output quality, and how to recognize when something has gone wrong. This is not a technical qualification. It is the practical knowledge a knowledge worker needs to use AI outputs responsibly.

Together, these four dimensions give a picture of how someone operates with AI in practice, not just whether they have heard of it.

Three methods, not one quiz

A single multiple-choice test cannot capture all four dimensions reliably. That is why Maki uses a multi-method approach.

Situational judgment tests present candidates with hypothetical work scenarios involving AI. They are asked to decide how they would use AI to solve a problem, troubleshoot an issue, or improve an outcome. You see how they think and prioritize, not how they describe what they might do in the abstract.

Structured behavioral grids capture how candidates have actually applied AI in practice. These are behavioral questions with a structured response format, designed to surface real past experience rather than rehearsed answers. The signal is observable and comparable across candidates.

Conversational cognitive tasks, coming in a future Mochi release (Mochi 3.5), will go further: open-ended, complex problems that show how someone approaches a real challenge using AI. The quality of their judgment, their approach to uncertainty, and their ability to integrate AI meaningfully into problem-solving will all be visible.

Each method maps deliberately to the four dimensions of the framework. The combination gives a more complete and reliable signal than any single format could provide.

Live now in Shiro and Mochi

Situational judgment tests and structured behavioral grids are available today inside Shiro and Mochi. If you are an existing customer, there is nothing to integrate, no new platform to onboard, and no change to your pricing.

Drop the assessments into existing role profiles and flows. The AI proficiency score sits in the candidate's profile alongside their other results, role requirements, and full application record, visible at the moment a hiring decision is being made. Not in a separate platform. Not requiring an export. Where it actually matters.

Conversational cognitive tasks will be available in Mochi soon.

What this means for your hiring

If your team is currently assessing AI capability through resume bullets and interview impressions, you are operating on the weakest possible signal for one of the most consequential hiring criteria in the market right now.

Maki's AI proficiency assessment gives you something different: a structured, science-backed, observable measure of how candidates actually perform with AI at work. Built for graduates, knowledge workers, and managers, the population that most hiring teams are actually trying to assess. Validated by a dedicated science team. And available today without disrupting a single thing in your existing process.

Hiring for AI capability without measuring it isn't a gap in your process. It's a bet you're placing every time you make an offer.

See what Maki Agents can do for you
Experience how Maki’s AI agents simplify, speed up, and elevate your hiring
Request a demo