How to Make Decisions Based On Limited Data in Informatics Projects

In the fast-paced world of informatics, making decisions with limited data is a common challenge. This article explores practical strategies for navigating data gaps in various scenarios, from phased migrations to international SEO campaigns. Drawing on insights from industry experts, readers will discover how to leverage rapid testing, data triangulation, and strategic investments to drive success in data-limited environments.

  • Phased Migration Strategy Overcomes Data Gaps
  • Rapid Testing Clarifies Limited User Data
  • Calculated Bets Drive International SEO Success
  • Triangulating Data Sources Builds Investor Trust
  • Reframe Ad Spend as Data Acquisition Investment

Phased Migration Strategy Overcomes Data Gaps

During a SharePoint Online migration for a longtime client, we encountered a challenge: their on-premises file system was massive, and there was no reliable documentation on folder ownership or data sensitivity. We had terabytes of data and no metadata to rely on. The leadership team was pushing for a hard cutover, but without knowing who needed what, it felt like we were flying blind. I had to make a decision: either delay the project for a manual audit (which would cost time and money) or move forward with a phased migration strategy based on usage patterns and recent access logs.

I chose the latter. We extracted the last 90 days of file access data and started with what was actively in use, labeling the rest as "archive." It wasn't perfect, but it provided users with a soft landing and reduced immediate risk. I maintained tight communication—daily updates, clear rollback plans, and fast user feedback loops. In hindsight, that decision worked because we acknowledged the gaps upfront and planned around them instead of pretending the data was clean. It's a good reminder that sometimes, done with transparency beats perfect delayed indefinitely.
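As a rough illustration of that triage step, here is a minimal Python sketch of bucketing files into a first migration wave versus an archive based on last access, assuming the audit data has already been exported to a CSV. The file name and column names are hypothetical, not the client's actual schema.

```python
import csv
from datetime import datetime, timedelta

# Hypothetical input: an export of file paths and last-access timestamps
# pulled from on-premises audit logs. Column names are assumptions.
ACCESS_LOG = "access_log.csv"   # columns: path, last_accessed (ISO 8601)
CUTOFF = datetime.now() - timedelta(days=90)

active, archive = [], []
with open(ACCESS_LOG, newline="") as f:
    for row in csv.DictReader(f):
        last_accessed = datetime.fromisoformat(row["last_accessed"])
        # Files touched in the last 90 days go in the first migration wave;
        # everything else is labeled "archive" for a later, lower-risk pass.
        (active if last_accessed >= CUTOFF else archive).append(row["path"])

print(f"Wave 1 (active): {len(active)} files")
print(f"Archive: {len(archive)} files")
```

The point of the sketch is the decision rule, not the tooling: recent access stands in for the ownership metadata that didn't exist.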

Rapid Testing Clarifies Limited User Data

I've been in that position more than once, especially in the early stages of building Zapiy.com. One situation that stands out was when we were developing one of our core automation features. We had limited usage data and only a handful of customer insights to guide us, but we had to make a decision quickly about whether to invest more resources into expanding the functionality or pivot in a different direction.

The data we had was mixed. Early adopters were giving positive feedback, but overall engagement metrics weren't as strong as we expected. It would have been easy to either overreact or freeze. But instead of chasing certainty, I leaned into clarity.

We took a step back and asked a simple question: what do we know for sure, and what do we need to believe for this feature to be worth scaling? We mapped out our assumptions, identified the biggest unknowns, and designed a rapid customer feedback loop to test them—fast. This meant conducting short-form interviews, watching real user behavior, and digging into support tickets to uncover friction points that data alone wasn't showing us.
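One way to make that assumption-mapping step concrete is a simple risk-ranking sketch like the one below, in Python. The assumption text, confidence values, and scoring weights are hypothetical placeholders, not the actual list the team used.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str      # what we need to believe for the feature to be worth scaling
    confidence: float   # 0-1, how sure we are right now
    impact: int         # 1-5, how much the decision hinges on it

# Illustrative entries only; real assumptions would come from team workshops.
assumptions = [
    Assumption("Demand extends beyond early adopters", 0.5, 5),
    Assumption("Low engagement is a UX problem, not a demand problem", 0.4, 4),
    Assumption("Onboarding friction is the main drop-off point", 0.6, 3),
]

# Test the riskiest assumptions first: high impact, low confidence.
for a in sorted(assumptions, key=lambda a: (1 - a.confidence) * a.impact, reverse=True):
    print(f"risk {(1 - a.confidence) * a.impact:.1f} -> {a.statement}")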

It wasn't perfect, but it was actionable. The insights we gathered helped us realize that the concept was solid, but the user experience needed to be more intuitive. Rather than scrapping the feature or blindly doubling down, we invested in UX improvements and re-launched with clearer onboarding and more guided actions.

The results spoke for themselves. Usage went up, churn around that part of the product dropped, and the feature became a cornerstone of our offering.

What I learned through that process—and what I still lean on today—is that limited data doesn't have to mean limited clarity. You can still make high-quality decisions if you ask the right questions, get close to your users, and stay open to adjusting course. In informatics and in business, it's not about having perfect information—it's about having the right mindset to make the most of what you have.

Max Shak, Founder/CEO, Zapiy

Calculated Bets Drive International SEO Success

Early in scaling Scale By SEO, we faced a critical decision about expanding into international markets with extremely limited data on search behavior patterns. Our client was losing money rapidly on failed campaigns, and we had only three weeks of incomplete analytics from their previous agency.

I implemented a rapid-fire testing framework, allocating small budgets across different geographic regions while simultaneously conducting keyword research in multiple languages. The key was creating decision trees based on leading indicators rather than waiting for statistically significant data. We tracked micro-conversions, engagement patterns, and search query intent signals to build a predictive model.

Within two weeks, we identified three high-potential markets and reallocated the entire budget accordingly. The campaign became their most profitable international expansion, generating 340% ROI in six months. Sometimes you have to make calculated bets based on directional data rather than perfect information. That's how visibility in search is achieved.
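To make the directional-data idea concrete, here is a minimal Python sketch that ranks test regions by leading indicators rather than significance tests. The region names, metrics, and weights are invented for illustration, not the campaign's real numbers.

```python
# Hypothetical leading-indicator metrics per test region, gathered during the
# small-budget test window; field names and weights are illustrative.
regions = {
    "DE": {"micro_conversions": 42, "engaged_sessions": 310, "spend": 500.0},
    "FR": {"micro_conversions": 18, "engaged_sessions": 150, "spend": 500.0},
    "BR": {"micro_conversions": 55, "engaged_sessions": 280, "spend": 500.0},
}

def directional_score(m: dict) -> float:
    # Blend engagement with inverse cost per micro-conversion. Deliberately no
    # significance testing: this ranks directional signal, not proven winners.
    cost_per_micro = m["spend"] / max(m["micro_conversions"], 1)
    return m["engaged_sessions"] / 100 + 50 / cost_per_micro

ranked = sorted(regions.items(), key=lambda kv: directional_score(kv[1]), reverse=True)
for name, metrics in ranked:
    print(f"{name}: score {directional_score(metrics):.1f}")
```

A ranking like this is only a tie-breaker for where to reallocate budget next; the verdict still comes from the fuller data those bets buy.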

Triangulating Data Sources Builds Investor Trust

During an investor readiness project for a healthtech startup, we were tasked with validating their user engagement metrics—which they'd tracked using a very early-stage analytics tool. The problem was that the data was patchy at best. Some KPIs were updated inconsistently, and their churn calculation was based on assumptions rather than behaviorally sound logic. Normally, that would be a showstopper for any investor conversation. But we didn't have the luxury of waiting—they were already in conversations with VCs.

I decided we'd triangulate. Instead of relying on one dataset, we pulled in qualitative feedback from users, engagement from support tickets, and even timestamps from backend logs to infer active usage. One of our team members built a lightweight model using that mosaic of data to simulate what more complete metrics would likely look like. Not perfect, but defensible.
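A stripped-down sketch of that triangulation logic might look like the following Python snippet. The data, cutoff date, and two-of-three agreement rule are assumptions for illustration, not the model our team actually built.

```python
from datetime import datetime

# Hypothetical, simplified stand-ins for the three sources being merged:
# backend log timestamps, support-ticket activity, and user interviews.
backend_last_seen  = {"u1": "2024-05-01", "u2": "2024-02-10", "u3": "2024-05-03"}
ticket_activity    = {"u1": 2, "u2": 0, "u3": 1}    # tickets in last 30 days
interviewed_active = {"u3"}                          # self-reported active users

cutoff = datetime(2024, 4, 15)

def likely_active(user: str) -> bool:
    # Require at least two of three independent signals to agree before
    # counting a user as active; cross-source agreement is what makes the
    # inferred metric defensible when no single dataset is trustworthy.
    signals = [
        datetime.fromisoformat(backend_last_seen[user]) >= cutoff,
        ticket_activity[user] > 0,
        user in interviewed_active,
    ]
    return sum(signals) >= 2

active = [u for u in backend_last_seen if likely_active(u)]
print(f"Inferred active users: {active}")
```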

When presenting this to the investors, we didn't try to hide the gaps. Instead, we emphasized how the team had taken initiative to validate early traction, and laid out a plan to implement a proper data infrastructure post-raise. That honesty, paired with the resourcefulness of the workaround, actually built more trust. It reminded me that investors rarely expect perfection—but they always look for clarity and adaptability.

Niclas Schlopsna, Managing Consultant and CEO, spectup

Reframe Ad Spend as Data Acquisition Investment

In paid media, we launch campaigns with incomplete data every single day. The biggest mistake I see is people trying to find the perfect answer in that initial void. They analyze proxy metrics excessively, attempting to guess the winner. This is the wrong approach. The real informatics project isn't analyzing the limited data you have. It's building a system to acquire the critical data you lack, as efficiently as possible.

We treat the initial ad budget as an investment in data acquisition, not sales generation. The most important decision we make is not which ad creative is best, but how much we are willing to spend to learn if an ad has potential. This turns a guessing game into a disciplined experiment. You have to be willing to pay for clarity. Once you reframe the problem that way, the path forward becomes much clearer.
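As a sketch of what paying for clarity can look like in practice, here is a simple kill/scale rule in Python with an explicit learning budget per creative. The thresholds are placeholders for illustration, not recommended benchmarks.

```python
# Illustrative kill/scale rule for a creative test; thresholds are assumptions.
# The point is deciding the "price of learning" up front instead of
# over-analyzing proxy metrics in the initial void.
LEARNING_BUDGET = 200.0   # max spend per creative before forcing a verdict
MIN_CTR = 0.01            # leading indicator: click-through rate floor

def verdict(spend: float, impressions: int, clicks: int) -> str:
    ctr = clicks / impressions if impressions else 0.0
    if spend < LEARNING_BUDGET:
        return "keep buying data"          # haven't paid for clarity yet
    return "scale" if ctr >= MIN_CTR else "kill"

# Example: three creatives after the first test window.
for name, spend, impressions, clicks in [
    ("A", 210.0, 9000, 120),
    ("B", 205.0, 8800, 40),
    ("C", 90.0, 3100, 35),
]:
    print(name, verdict(spend, impressions, clicks))
```

Fixing the budget and the verdict rule before launch is what turns the guessing game into a disciplined experiment: every dollar either produces sales or produces a decision.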
