Living the Promise of Big Data Part I

The promise of big data is potentially revolutionary for health care: it can ease the transition to genuinely data-driven decision-making, enabling payers to better understand the needs of specific member groups and ultimately improve standards of care. Practically speaking, however, the sheer volume of data that health plans can access presents unique challenges in aggregating it, applying it to a specific area of focus, and distilling meaning from it.

The Affordable Care Act (ACA) is poised to influence care and efficiency, but many of its principal reform provisions are still evolving. Risk-adjusted payment – a means of rewarding care providers for producing better outcomes at lower cost, and one that requires big data analytics capabilities – is an important feature of the ACA “work-in-progress” landscape.

The Centers for Medicare & Medicaid Services (CMS) and Medicare Advantage plans have deployed and continually refined data-based risk science methodologies for more than a decade. The industry’s constantly advancing risk adjustment aptitude increases the likelihood of a high-functioning market where insurers can compete confidently on service and quality.

A comprehensive analytics approach provides a framework that helps plans evaluate members clinically, collaborate with providers more strategically, and apply mathematical models to project which medical charts are likely to return new information about a patient. Effective chart-selection models help gauge the impact of efforts to improve clinical effectiveness, increase outcome-centric care management, and ensure cost efficiency, all while optimizing payer risk-adjusted revenue.
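To make the chart-selection idea concrete, here is a minimal sketch of how such a model might rank charts. The feature names and weights are purely illustrative assumptions, not Optum's production methodology; a real model would be trained on historical chart-review yield.

    import math

    # Hypothetical feature weights for a simple logistic scoring model.
    # A production chart-selection model would learn these from historical review yield.
    WEIGHTS = {
        "open_suspect_conditions": 1.2,    # suspected diagnoses not yet confirmed on a claim
        "months_since_last_review": 0.3,   # staleness of the member's documented profile
        "specialist_visits_last_year": 0.4,
    }
    BIAS = -3.0

    def chart_yield_score(features):
        """Estimate the probability that reviewing this chart returns new information."""
        z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))  # logistic link keeps the score in (0, 1)

    def rank_charts(charts):
        """Order candidate charts so reviewers work the highest-yield charts first."""
        return sorted(charts, key=lambda c: chart_yield_score(c["features"]), reverse=True)

    # Illustrative usage with made-up members.
    charts = [
        {"member_id": "A001", "features": {"open_suspect_conditions": 3,
                                           "months_since_last_review": 14,
                                           "specialist_visits_last_year": 2}},
        {"member_id": "A002", "features": {"open_suspect_conditions": 0,
                                           "months_since_last_review": 2,
                                           "specialist_visits_last_year": 1}},
    ]
    for chart in rank_charts(charts):
        print(chart["member_id"], round(chart_yield_score(chart["features"]), 3))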

There are three steps in developing an analytics approach: (1) access data sources; (2) aggregate and stratify the data; and (3) model the data. In this piece we will focus on the first step and how to evaluate and analyze data sets.

Step 1: Access Data Sources

The most effective clinical analytics framework includes data in the following categories:

  • Population health benchmarks
  • CMS baseline data
  • Member data (e.g., lab, claims, pharmacy, demographic)
  • Provider data

In analyzing these data sets, plans need a comprehensive view of the member, with community data and market expertise providing context. Beyond access to large data sets, this work requires clinical and market expertise to draw meaningful insights from the data and to create a strategic foundation for member and provider engagement.
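As a rough illustration of what a comprehensive view of the member can mean in practice, the sketch below joins the four data categories into a single member record. The field names are assumptions made for illustration; actual CMS, claims, lab, and pharmacy feeds each carry their own layouts and identifiers that must be reconciled first.

    from dataclasses import dataclass, field

    # A simplified, illustrative member view combining the data categories above.
    @dataclass
    class MemberProfile:
        member_id: str
        demographics: dict = field(default_factory=dict)   # age, sex, dual-eligibility status
        claims: list = field(default_factory=list)          # adjudicated medical claims
        pharmacy: list = field(default_factory=list)        # fill history
        labs: list = field(default_factory=list)            # result values and dates
        cms_baseline: dict = field(default_factory=dict)    # conditions CMS already recognizes

    def build_profile(member_id, sources):
        """Merge per-source extracts (keyed by source name) into one member view."""
        return MemberProfile(
            member_id=member_id,
            demographics=sources.get("demographics", {}),
            claims=sources.get("claims", []),
            pharmacy=sources.get("pharmacy", []),
            labs=sources.get("labs", []),
            cms_baseline=sources.get("cms_baseline", {}),
        )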

How do organizations leverage these data sets?

  • CMS baseline data provides an effective starting point for risk adjustment. However, baseline data – by definition – constructs only a partial member profile relative to what is realistically within reach. Limited detail in CMS data, a lag in reporting, and the retrospective nature of capturing or reconfirming previously documented conditions all restrict the ability to predict diagnoses a provider has not yet indicated.

Implementing more robust, forward-focused risk analytics that facilitate concurrent and prospective modeling of member profiles and care costs involves collecting, managing and analyzing massive sets of constantly updated, population-scale data. The clinical and administrative data that feeds comprehensive clinical analytics derives from a significantly wider inventory of sources.

  • Population health benchmarks constructed from community-based clinical information – and combined with specific member information from CMS and the health plan – enable a risk management system to identify both known and suspected medical conditions. The outcome is a predictive method to build clinically based member profiles that respond dynamically to new information. Comparing health plan data to a valid population benchmark or baseline makes it easier, for example, to isolate outliers, build explicit and complete member risk profiles, and pinpoint clinical care gaps with specificity.
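A minimal sketch of that benchmark comparison might look like the following: a plan's documented condition prevalence is compared with a community benchmark to surface possible documentation gaps. The condition labels, prevalence figures, and tolerance threshold are illustrative assumptions only.

    # Compare a plan's observed condition prevalence to a population benchmark
    # and flag conditions that appear under-documented relative to the baseline.
    benchmark_prevalence = {"diabetes": 0.28, "chf": 0.12, "copd": 0.15}   # community baseline
    plan_prevalence      = {"diabetes": 0.27, "chf": 0.06, "copd": 0.14}   # documented in plan data

    def flag_gaps(plan, benchmark, tolerance=0.25):
        """Return conditions whose documented prevalence trails the benchmark
        by more than the given relative tolerance (possible documentation gaps)."""
        gaps = {}
        for condition, expected in benchmark.items():
            observed = plan.get(condition, 0.0)
            if expected > 0 and (expected - observed) / expected > tolerance:
                gaps[condition] = {"expected": expected, "observed": observed}
        return gaps

    print(flag_gaps(plan_prevalence, benchmark_prevalence))
    # e.g. {'chf': {'expected': 0.12, 'observed': 0.06}}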

By virtue of working with various Medicare Advantage and dual-eligible populations that total more than 3.5 million members, Optum accesses and leverages claims, patient records and other profile inputs in volumes essential to distilling high-integrity benchmarks and baselines. Once again, however, data records in the millions or billions do not translate automatically to benchmarking integrity. The realities of multiple disparate health plans, claim systems and third-party service providers – all with unique data requirements and limitations – complicate building accurate baselines.

Step 2: Aggregate and Stratify the Data

Clinical data and market insights are aggregated and stratified to create actionable care management plans for each member. These plans encompass prospective and retrospective services that help ensure that members receive appropriate care, that providers have appropriate member information at the point of care, and that health plans receive the appropriate information to document member conditions. These management plans are characterized by the following (a brief stratification sketch follows the list):

  • Statistically supported – The latest techniques in applied statistics and probability help identify the program(s) most likely to close care gaps and confirm suspected conditions.
  • Responsive – Flexibility built into the model allows members to progress through the spectrum of care as their needs evolve; members transition between programs as the data indicates.
  • Comprehensive reporting – Detailed analytics drive the ongoing training and assessment of the model. As results come in, that data is used in suspect identification and targeting to strengthen future results.
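The stratification logic itself can be sketched, under heavy simplification, as a small set of rules that route each member to a care program based on risk score and open care gaps. The program names, thresholds, and inputs below are assumptions for illustration, not the actual model described above.

    def assign_program(member):
        """Map a member's risk score and open care gaps to a care management program."""
        score = member["risk_score"]
        open_gaps = member["open_care_gaps"]
        if score >= 2.0 or open_gaps >= 3:
            return "complex_care_management"      # intensive, prospective outreach
        if score >= 1.0 or open_gaps >= 1:
            return "targeted_gap_closure"         # provider-facing gap reports, chart review
        return "wellness_monitoring"              # routine monitoring; re-stratify as data arrives

    members = [
        {"member_id": "B001", "risk_score": 2.4, "open_care_gaps": 1},
        {"member_id": "B002", "risk_score": 0.7, "open_care_gaps": 2},
        {"member_id": "B003", "risk_score": 0.4, "open_care_gaps": 0},
    ]
    for m in members:
        print(m["member_id"], assign_program(m))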

Step 3: Model the Data

Front-end, proactive identification of Medicare Advantage members at risk of developing chronic diseases requires rules-driven logic based on predictive models built from the comprehensive clinical and administrative data sets described above. Stratifying members according to clinical risk profiles and disease types leverages predictive signs and risk factors that indicate likely conditions not yet appearing on claims.
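One simplified way to express the "conditions not yet appearing on claims" idea is a rule set that flags suspect conditions when pharmacy or lab signals are present without a corresponding claim diagnosis. The marker-to-condition mappings here are deliberately simplified assumptions; real suspecting logic draws on far richer clinical evidence.

    # Flag conditions suggested by pharmacy or lab signals but not yet documented on claims.
    SUSPECT_RULES = {
        "diabetes": {"pharmacy": {"metformin", "insulin glargine"}, "labs": {"hba1c_high"}},
        "chf":      {"pharmacy": {"furosemide"},                    "labs": {"bnp_high"}},
    }

    def suspect_conditions(member):
        """Return conditions with supporting pharmacy/lab evidence but no claim diagnosis."""
        documented = set(member["claim_diagnoses"])
        rx = set(member["pharmacy_fills"])
        labs = set(member["lab_flags"])
        suspects = []
        for condition, markers in SUSPECT_RULES.items():
            if condition in documented:
                continue  # already captured on claims; nothing to suspect
            if rx & markers["pharmacy"] or labs & markers["labs"]:
                suspects.append(condition)
        return suspects

    member = {
        "claim_diagnoses": ["copd"],
        "pharmacy_fills": ["metformin", "albuterol"],
        "lab_flags": ["hba1c_high"],
    }
    print(suspect_conditions(member))   # ['diabetes'] -- a suspect for clinical confirmation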

Critical advances in risk management analytics highlight the value of significantly more robust statistical logic for planning and managing best-possible care. Aggregating and stratifying clinical data and market insights creates actionable, member-specific management plans. The resulting care planning outcomes reflect prospective and retrospective risk adjustment programs that help ensure:

  • Members receive appropriate care.
  • Providers have appropriate member information at the point of care.
  • Health plans receive appropriate information to document member conditions.

Risk adjustment, traditionally one of the least understood components of health care, has gained heightened visibility through ACA-driven reforms that aim to control costs by reducing adverse member selection. Ideally, this new awareness will also renew and intensify payer engagement in applying optimized risk adjustment analytics to close care quality and utilization gaps. That is particularly true in the chronic-care, high-cost Medicare reimbursement environment, where decreased payment rates make it essential to represent health plan risk scores accurately.

–Don James, Director of Product Management, Risk Adjustment Solutions at Optum
