The Three Little Pigs, medical records coding and who controls the narrative

December 13th, 2019 / By Rhonda Butler

One of the many wonderful things about the post-Google information age is that, with no appreciable time or effort, you can pose a question and see what’s out there. The other day, for reasons I don’t even want to know, I was wondering whether the story of “The Three Little Pigs” has a moral like Aesop’s Fables. So, I just opened a window in my usual search engine, typed “three little pigs moral,” and voila!

Hard work and dedication pay off.

At first glance, the search results seemed to generally agree on the moral, with only minor tweaks in the emphasis, like “hard work pays off, while laziness has consequences.” A second look showed lots of sites deriving other meaning from the story of the three little pigs. Here are a few examples:

  • Liveinnanny.com offers “10 lessons Kids Can Learn from ‘The Three Little Pigs’”—maybe because each day seems like ten days when you’re a live-in nanny.
  • Shortstoriesshort.com borrows the metaphor of building on a solid foundation in the three little pigs and adapts it to teach a concept in Christianity.
  • Bartleby.com goes meta and discusses the three little pigs as an example of the practice of using fairy tales to teach morals to children.

There were lots of personal, alternative morals for “The Three Little Pigs” (often called “the real lesson”) by individual bloggers. Also, hip versions for adults, and at least one person on LinkedIn who claims that fairy tales like “The Three Little Pigs” are an excellent subject for Harvard Business Review case studies.

I know—I am well into this blog and so far “medical records coding” is only in the title. What on earth is the connection between coding and all these versions of “The Three Little Pigs”? Just this: humans tell stories; to a large extent, stories are how we communicate. We translate events in the physical world into narrative form as a way of managing the volume and complexity of detail that bombards us. Every day in our work and personal lives, we create and listen to stories: traffic stories, travel stories, gossip, success stories, lessons-learned stories. And, depending on _X_ (and this is a big, big X), we can emphasize certain aspects that we deem important, and use that story to arrive at a different lesson/summary/point. This flexibility in creating and interpreting a narrative is inseparable from human culture.

Coded records are a type of story. Medical records coding starts with a large volume and rich complexity of detail about a patient’s encounter with the healthcare system. A coder uses pre-fab “sentences”—diagnosis and procedure codes—and follows a whole host of rules in order to create a narrative that captures and preserves what has been deemed most important about the encounter.
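As a rough illustration of the idea that a coded record is a compressed narrative, here is a minimal sketch. The ICD-10-CM codes below are real, but the encounter, the record structure, and the toy “grouping” rule are simplified stand-ins invented for this example—not an actual coding workflow or DRG grouper:

```python
# A sketch of a coded record as a structured "story": a rich clinical
# encounter reduced to pre-fab sentences (diagnosis codes) plus rules.
# The ICD-10-CM codes are real; the record layout and the summarize()
# rule are illustrative only.

encounter = {
    "principal_diagnosis": "J18.9",   # Pneumonia, unspecified organism
    "secondary_diagnoses": [
        "E11.9",                      # Type 2 diabetes mellitus without complications
        "I10",                        # Essential (primary) hypertension
    ],
}

def summarize(record):
    """Collapse the coded record into an even shorter narrative, the way
    DRG-style groupings do: one label stands in for many details."""
    has_comorbidity = len(record["secondary_diagnoses"]) > 0
    suffix = " with comorbidities" if has_comorbidity else ""
    return record["principal_diagnosis"] + suffix

print(summarize(encounter))  # J18.9 with comorbidities
```

Each layer of this process discards detail: the chart is richer than the codes, and the codes are richer than the grouped summary. What survives at each step is exactly what the rules—and the people who wrote them—deemed important.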

The whole enterprise of getting what is deemed most important from a medical record and presenting it in a pithy but structured form works…sort of. A couple of the biggest places where it doesn’t work are unavoidable, because they are inseparable from being human:

  1. Diversity of interpretation is such a strong human tendency that to mitigate it there are gobs of rules and guidelines for coding medical records. Without those coding guidelines, coders would inevitably emphasize different aspects of the encounter and come up with a very different “lesson”—a different principal/first listed diagnosis, DRG, HCC, quality score and all the rest. Even with the rules and guidelines, variability in coding is well known: three coders might code the same chart three different ways; one coder might code the same chart differently on a different day.
  2. Those of us who develop the coding systems and coding guidelines are the ones attempting to populate the narrative with relevant details. We are also the ones who design hierarchical structures like DRGs and HCCs, which derive from the coded record the estimates of resources and risk that are emphasized in the codes. Designing these systems and structures requires a whole lot of decision making about what is important in a medical record. Asking a group of people to agree on what is important is not that fun. (Consider every war ever fought over religion, for starters.) It is typically a slow, painful process, full of compromises and muddle. More often than not, consensus is an agreed-upon fiction that passes for consensus, with final results that no individual really loves.

One of the attractive things about training machines to learn from human-designed starter algorithms and then “discover” what is important is that it takes the “we” out of the iterative task of interpreting rules. Instead, it puts more responsibility on the interaction of algorithms to extract and prioritize the details that will form the narrative. Does that mean we still control the narrative?

These are interesting times in health care. We are seeing all kinds of big tech interest in doing stuff with a patient’s unstructured healthcare data, even as we use very old school, hierarchical systems like DRGs and diagnosis/procedure codes to create—and constrain—a simplified narrative of a patient encounter. Structured data produced using this “old school” method still keeps the lights on in the healthcare system to a large extent. Models that work differently are so far hovering at the edges, as proposals and pilot projects.

Unlike Aesop or the Brothers Grimm, this blog does not end with any answers. I have only more questions. To the extent that algorithms are increasingly involved in choosing what to extract from raw patient data, finding what is important, and passing it on, who controls that narrative? To the extent that humans write the initial algorithms, does that mean we do, still? How much control is desirable, or even possible? Will there be variability in the resulting data similar to the “different day, different opinion” tendencies of human thinking, or will there be a self-reinforcing (and therefore inaccurate, because exaggerated) “binary-ness” in the interpretation of data—a polarizing tendency with consequences for our assignment of risks and resources?

Rhonda Butler is a clinical research manager with 3M Health Information Systems.