This post was originally featured on HIStalk.
As Robert X. Cringely recently noted, computers enable unparalleled discrimination. Before insurance companies could calculate rates on an individualized basis, they calculated rates based on population pools. They simply didn't have the computing power or prowess to discriminate at the individual level. As a result, the healthy financially supported the unhealthy through premiums averaged across population pools.
In the 1990s, the cost of computing fell to a point where payers could discriminate. So they did. Payers could easily identify patients who would incur high costs based on a relatively simple set of questions about one's health. For many patients, payers were so concerned that healthcare costs would be high that they preferred not to take on any risk at all and simply refused to insure the patient. This has been the controversial norm for the better part of the last 15 years.
One of the most important provisions of the Affordable Care Act mandates that payers cannot deny coverage for any reason. Payers must price that risk. In many cases, they expect the costs of care to be so large that they distribute those costs across their entire insured populations. Paul Levy recently noted that this is happening to such a degree that many healthy individuals are seeing their premiums increase under the Affordable Care Act.
This is deeply ironic. Computers, the ultimate discriminatory tool, can no longer discriminate. Over the 15-20 years of the discriminatory cycle, healthcare costs have systematically outgrown GDP as obesity has risen to become the #1 killer in the US. Couple that with the fact that no one can now be denied coverage (and many of the newly insured weren't healthy to begin with), and premiums are, on average, increasing for a substantial percentage of society.
Is this a classic case in which computers are eating healthcare, or are we as a society eating ourselves?