Worked Example: Prioritizing AI Investments
Walk through a complete answer to 'How would you prioritize AI investments for a mid-size SaaS company?' using impact-effort-risk frameworks.
The Question
Here is the question: 'You are a Group PM at a mid-size SaaS company with $100M ARR. The CEO has allocated $5M for AI investments this year. You have five potential AI projects, each with different costs, timelines, and expected impacts. How would you prioritize?' This question tests your ability to create a structured prioritization framework that accounts for the unique characteristics of AI investments.
The trap is using a generic prioritization framework (RICE, ICE) without adapting it for AI-specific factors. AI projects have unique risk profiles: data dependencies, model uncertainty, and longer development cycles with less predictable outcomes.
Worked Answer: Framework Setup
"I would use a modified RICE framework adapted for AI investments. Standard RICE evaluates Reach, Impact, Confidence, and Effort. For AI projects, I would modify it to include two additional dimensions: Data Readiness and Technical Risk. I call this RICE-DT."
"Reach: How many customers or users does this AI feature affect? An AI feature that improves search for all users has higher reach than an AI feature that optimizes billing for enterprise customers only. Impact: What is the expected magnitude of improvement? I measure this in revenue impact (new revenue or reduced churn) and operational efficiency (cost saved). Confidence: How confident are we in the impact estimate? For AI projects, this is usually lower than for traditional feature development because model performance is uncertain until you build and evaluate it. I score Confidence on a 3-point scale: High (similar feature has been proven at other companies), Medium (the approach is sound but unproven for our use case), Low (novel approach with significant uncertainty)."
"Effort: Total cost including ML engineering time, data engineering time, infrastructure, and ongoing maintenance. AI projects have higher ongoing costs than traditional features because models require monitoring, retraining, and data pipeline maintenance. Data Readiness: Is the data we need available, clean, and labeled? A score of High means the data exists and is accessible. Medium means the data exists but needs cleaning or labeling. Low means we need to collect new data, which adds 3-6 months to the timeline. Technical Risk: How likely is it that the AI approach works for this problem? High risk means the state of the art for this task is immature. Low risk means there are proven approaches with known performance benchmarks."
[Interviewer note: The RICE-DT adaptation is well-reasoned. Data Readiness and Technical Risk are the two factors that most distinguish AI project prioritization from standard feature prioritization. The candidate defined each dimension clearly with AI-specific considerations.]
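To make the mechanics concrete, here is a minimal sketch of how a RICE-DT score could be computed. The answer never specifies how the six dimensions are weighted or combined, so the level-to-number mapping, the weights, and the 10-point scaling below are illustrative assumptions, not the formula behind the scores quoted later.

```python
# Illustrative RICE-DT scoring sketch. The qualitative-to-numeric mapping,
# the weights, and the inversion of Effort and Technical Risk are assumptions
# chosen to show the mechanics, not the candidate's actual formula.
LEVEL = {"Low": 1.0, "Medium": 2.0, "High": 3.0}

WEIGHTS = {                  # assumed weights; they sum to 1.0
    "reach": 0.20,
    "impact": 0.25,
    "confidence": 0.15,
    "effort": 0.15,          # inverted below: lower effort scores higher
    "data_readiness": 0.15,
    "technical_risk": 0.10,  # inverted below: lower risk scores higher
}

def rice_dt_score(reach: str, impact: str, confidence: str, effort: str,
                  data_readiness: str, technical_risk: str) -> float:
    """Return an assumed 0-10 RICE-DT score from High/Medium/Low ratings."""
    values = {
        "reach": LEVEL[reach],
        "impact": LEVEL[impact],
        "confidence": LEVEL[confidence],
        # Effort and Technical Risk hurt the score, so invert them (High -> 1, Low -> 3).
        "effort": 4.0 - LEVEL[effort],
        "technical_risk": 4.0 - LEVEL[technical_risk],
        "data_readiness": LEVEL[data_readiness],
    }
    weighted = sum(WEIGHTS[name] * value for name, value in values.items())
    return round(weighted / 3.0 * 10, 1)  # map the 1-3 weighted range onto a 10-point scale
```

In practice, Impact and Effort would start as dollar estimates (as the candidate's project write-ups below do) and be bucketed into levels; those bucket thresholds are another judgment call.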
Worked Answer: Applying the Framework
"Let me apply this to five hypothetical projects with the $5M budget. Project A: AI-powered customer support chatbot. Reach: High (all customers contact support). Impact: Medium ($800K annual cost reduction). Confidence: High (chatbots are proven). Effort: $800K. Data Readiness: High (we have years of support tickets). Technical Risk: Low. RICE-DT score: 8.4/10."
"Project B: Predictive churn model. Reach: High (all customers). Impact: High ($2M revenue protection if we reduce churn by 5%). Confidence: Medium (depends on signal quality in our data). Effort: $600K. Data Readiness: Medium (usage data exists but needs feature engineering). Technical Risk: Low. RICE-DT score: 7.8/10."
"Project C: AI-generated product analytics dashboards. Reach: Medium (power users only). Impact: Medium ($500K in reduced analyst headcount). Confidence: Medium. Effort: $1M. Data Readiness: High. Technical Risk: Medium (natural language to SQL is improving but has accuracy limitations). RICE-DT score: 5.6/10."
"Project D: AI-powered onboarding personalization. Reach: Medium (new customers only). Impact: High ($1.5M from improved activation rate). Confidence: Low (novel for our context). Effort: $1.2M. Data Readiness: Low (need to instrument new user behavior data). Technical Risk: Medium. RICE-DT score: 4.8/10."
"Project E: AI feature for predictive lead scoring. Reach: Low (sales team only). Impact: High ($1.8M from improved sales efficiency). Confidence: High (proven approach). Effort: $400K. Data Readiness: Medium (CRM data exists but needs integration). Technical Risk: Low. RICE-DT score: 7.2/10."
[Interviewer note: The candidate scored five distinct projects with enough specificity to be credible, including realistic cost and impact estimates. The scoring captures the key tradeoffs between projects. This level of quantitative rigor in an interview answer is impressive.]
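Continuing that sketch, the five hypothetical projects can be pushed through the assumed scoring function above. Because the weights and the effort bucketing are assumptions, the absolute numbers will not reproduce the candidate's 8.4/7.8/5.6/4.8/7.2, but the relative ordering (A, B, E, C, D) comes out the same.

```python
# Push the five hypothetical projects through the rice_dt_score sketch above.
# Effort is bucketed from the dollar estimates in the answer (assumed thresholds:
# <= $500K -> "Low", <= $1M -> "Medium", above -> "High"); all other levels are
# taken verbatim from the answer.
projects = {
    # name: (reach, impact, confidence, effort, data_readiness, technical_risk)
    "A: support chatbot": ("High", "Medium", "High", "Medium", "High", "Low"),
    "B: churn prediction": ("High", "High", "Medium", "Medium", "Medium", "Low"),
    "C: analytics dashboards": ("Medium", "Medium", "Medium", "Medium", "High", "Medium"),
    "D: onboarding personalization": ("Medium", "High", "Low", "High", "Low", "Medium"),
    "E: lead scoring": ("Low", "High", "High", "Low", "Medium", "Low"),
}

ranked = sorted(projects.items(), key=lambda kv: rice_dt_score(*kv[1]), reverse=True)
for name, levels in ranked:
    print(f"{rice_dt_score(*levels):4.1f}  {name}")
# With the assumed weights this prints A, B, E, C, D from highest to lowest,
# matching the candidate's relative ordering, though not the exact scores.
```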
Worked Answer: Recommendation and Sequencing
"Based on RICE-DT scores, my prioritization is: A (chatbot, $800K), then B (churn prediction, $600K), then E (lead scoring, $400K), then D (onboarding personalization, $1.2M). Total: $3M. I would hold $2M in reserve for two reasons."
"First, AI projects frequently run over budget. Models do not perform as expected, data quality issues emerge, and iteration cycles are longer than planned. A 40% contingency is prudent for a first year of serious AI investment. Second, the reserve funds Project D's data instrumentation. I would spend $200K now to instrument the new user behavior data, so that by Q3, we have the data foundation to evaluate whether Project D is worth the full investment."
"The sequencing matters as much as the prioritization. I would start A and E simultaneously (they require different teams and have no dependencies). B starts in month 2, after we scope the feature engineering work. This sequencing means we ship our first AI feature (lead scoring) in Q1, the chatbot in Q2, and churn prediction in Q3. Each launch builds organizational confidence and generates learning that informs later projects."
"The key metric I would report to the CEO quarterly: cumulative ROI of AI investments (revenue impact + cost savings) versus spend. The target for year 1 is 2x return on the $3M invested, with the understanding that the churn prediction model may not show full impact until year 2. I would also track time-to-value for each project (weeks from kickoff to measurable impact) as an internal efficiency metric."
[Interviewer note: The budget reserve and data instrumentation for Project D show strong strategic thinking. The sequencing rationale is practical: build organizational capability progressively. The reporting metrics are appropriate for a CEO audience. Overall score: 4.5/5.]
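The budget plan and the CEO-facing metric can also be written down directly. The dollar figures below come from the worked answer; the mid-year check-in numbers passed to the ROI function are hypothetical, included only to show how the metric would be computed.

```python
# Year-1 budget plan from the worked answer: $5M allocation, four funded
# projects, and the remainder held in reserve (~40% of the total budget).
BUDGET = 5_000_000

funded = {  # committed projects in priority order (dollar figures from the answer)
    "A: support chatbot": 800_000,
    "B: churn prediction": 600_000,
    "E: lead scoring": 400_000,
    "D: onboarding personalization": 1_200_000,  # staged behind the Q3 data checkpoint
}
reserve = BUDGET - sum(funded.values())  # $2,000,000

def cumulative_roi(revenue_impact: float, cost_savings: float, spend: float) -> float:
    """Quarterly CEO metric: cumulative value created divided by cumulative spend."""
    return (revenue_impact + cost_savings) / spend

print(f"Committed: ${sum(funded.values()):,}  Reserve: ${reserve:,}")
# Hypothetical mid-year check-in, invented only to show the calculation;
# the year-1 target in the answer is 2.0x on the $3M invested.
print(f"ROI so far: {cumulative_roi(900_000, 400_000, 1_400_000):.2f}x")
```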
Key Takeaways
- Adapt standard prioritization frameworks for AI by adding Data Readiness and Technical Risk dimensions
- AI projects have higher ongoing costs than traditional features. Factor in monitoring, retraining, and data pipeline maintenance
- Hold a budget reserve (30-40% of the total allocation) in the first year of AI investment. Projects routinely run over due to data and model iteration
- Sequence projects to build organizational AI capability progressively. Early wins build confidence for larger investments
- Report AI investment ROI quarterly with cumulative revenue impact vs. spend. Set realistic timelines: some AI investments take 6-12 months to show full impact