Consumer data privacy has long been a priority concern in the health technology sector. An enormous amount of information is constantly transferred to, made accessible to, and collected from end users, and the stakes are especially high when that information is personal health data.
Users continuously share information with applications, both automatically and actively, that many would not share even with close family or friends. That information is then repackaged and often sold for profit.
Although health information feels sensitive, it is only explicitly protected by federal regulation when it is collected by insurance companies, hospitals, pharmacies, or physicians’ offices. Data entered manually into an app, or gathered when an app tracks steps or estimates heart rate, is not covered by those regulations.
Instead, its protection depends on loose regulations that are vaguely worded and difficult to enforce.
Although this is not necessarily a new issue, the Federal Trade Commission (FTC) has shown renewed vigor in investigating and prosecuting companies that are intentionally deceptive regarding the collection, storage, and use of sensitive data from individual users.
This issue has been brought to light by the overturning of Roe v. Wade. Concern has been expressed about the ability to track and collect information about individuals who research abortion resources or travel to health clinics where abortions are performed.
In 2015-2016, Copley Advertising LLC was found to be using “geofencing” technology to set up boundaries around health clinics that performed abortions.
The advertising company would then send its clients’ ads, generally for abortion alternatives, to the smartphones of individuals who visited those clinics.
This is not the first time consumer data has been used in an intrusive and inappropriate manner.
In light of the recent court decision, this concern is again brought to the forefront. Although there have been some improvements in the oversight of commercial surveillance since the Copley Advertising incident, this aspect of the data economy is rapidly changing.
Companies continue to undergo digital transformations and many are shifting their infrastructure to hybrid or cloud-based frameworks. The continuation of breaches in consumers’ personal data highlights how loosely the data sharing economy is regulated and the inconsistencies with which the regulations are enforced.
The FTC plays a substantial role in investigating, and if necessary suing, companies with unfair or intentionally misleading data security practices.
The FTC aims to protect consumers and their data privacy, which falls under Section 5 of the FTC Act prohibiting “unfair or deceptive acts or practices in or affecting commerce.”
Additionally, the FTC also enforces the Health Breach Notification Rule, which specifically dictates that if any personal health information was unsecured, the consumer must be notified. This rule is similar to the Health Insurance Portability and Accountability Act (HIPAA) which is a federal statute that improves health data security and privacy, and ensures consumers are informed of any breaches regarding their sensitive health data.
However, HIPAA does not apply to organizations such as health apps, if the information has been ‘provided’ by the individual. Rather, HIPAA focuses specifically on health plans, health insurance, hospitals, outpatient care, and pharmacies.
Despite the FTC’s renewed vigor in investigating unfair data collection, there remain several concerns regarding how consumers’ data is collected, stored, and used.
One way companies use data deceptively is by promising anonymity. Consumers are assured that any individually identifying information is removed from aggregate data before it is used for research or sold to other companies. In reality, reversing this anonymization and mapping the data back to its original source has proven to be a simple task.
Research has shown that data can be traced back to an individual user using only a few patient demographics. Not only does this fail to meet current standards for anonymity, but it is also intentionally deceptive to consumers.
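The attack described above can be sketched in a few lines. This is a minimal illustration of a linkage (re-identification) attack, with entirely invented data: an "anonymized" health dataset still carries quasi-identifiers (ZIP code, birth date, sex) that can be joined against a public record such as a voter roll to recover names.

```python
# Sketch of a linkage attack on "anonymized" health records.
# All records below are invented for illustration only.

anonymized_records = [
    {"zip": "02138", "birth_date": "1954-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1978-02-14", "sex": "M", "diagnosis": "diabetes"},
]

# A hypothetical public dataset (e.g. a voter roll) with names attached
# to the same quasi-identifiers.
public_roll = [
    {"name": "Jane Doe", "zip": "02138", "birth_date": "1954-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth_date": "1978-02-14", "sex": "M"},
]

def reidentify(records, roll):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = []
    for rec in records:
        for person in roll:
            if all(rec[k] == person[k] for k in ("zip", "birth_date", "sex")):
                matches.append((person["name"], rec["diagnosis"]))
    return matches

# Every "anonymous" record maps back to a named individual.
print(reidentify(anonymized_records, public_roll))
```

Because only three demographic fields are needed here, stripping names alone does nothing to prevent the join; this is why removing direct identifiers does not meet meaningful standards of anonymity.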
Companies may also make misleading claims regarding how they obtain and use data. Often end-users are not aware of how their information is being used, and especially how it is repackaged, reorganized, and then sold for profit to data brokers.
These data brokerage companies typically have no connection with individual users, or even the original app or site from which the data was obtained. So, individuals are not aware of exactly where their information is going and who has access to it.
Another concern facing today’s consumers is that many companies have been found to over-collect data and retain it for far longer than is deemed necessary. Some of this data collection occurs automatically, especially location information, with personal devices such as phones, watches, and tracking devices found in vehicles.
Anytime a device is in use, it is also creating a map of the user’s location, including their home, work, and where they seek health services. Health data can also be user-generated, such as health information that is manually entered into certain websites or applications.
For example, health apps like fertility and pregnancy trackers, weight loss apps, and fitness trackers involve consumers actively answering questions and logging their information on a continuous basis.
The underlying trust is that the information they are entering will be kept anonymous, private, and only used for the application functionality. This is not the case with many different health apps.
Rather, they are specifically designed to profit from the research resulting from the data collected or the aggregate data itself.
Recently, Flo Health, a fertility app, settled FTC allegations that the company “misled consumers about the disclosure of their health data.”
The app, which claims to have more than 100 million users, collects information regarding the consumer’s overall health in addition to the menstrual cycle and pregnancy information entered by the consumer.
It then uses this information to help track fertility and pregnancy progress. The Flo app assured users that their fertility and pregnancy information would be secure and only used within the app’s typical functions.
However, allegations were made that, because of poor security infrastructure, the data was also used for analytics and marketing.
As a result, the consumer data was shared with other companies and their analytics divisions, which included major search engines and social media platforms.
Flo Health did not put any restrictions on how these outside analytic departments could use such sensitive data. This data sharing continued until numerous users issued complaints following a 2019 Wall Street Journal article that exposed that Flo Health had been ‘misusing sensitive data’.
Flo’s settlement did not include an admission of wrongdoing regarding its privacy practices. This is just one of several apps that consumers use to track fertility, pregnancy, weight loss, and overall health that have been found to have shared data for profit.
Often, privacy practices can be misleading and vague in their description of how the data will be utilized. An empirical study was performed to learn about the private data being shared on mental health apps.
The study found that users were routinely encouraged to share health information on an online platform, and that an estimated 41% of the mental health apps examined did not even have a privacy policy readily available to users.
Another study of 600 mobile health apps confirmed these findings. This study found that only roughly 31% of apps had privacy policies available to the user, and those policies were severely lacking in transparency and required a high level of literacy to understand.
Companies that handle sensitive health data should re-examine their security frameworks.
If not already in place, they should shift to a zero-trust model. Perimeter security is not sufficient for increasingly hybrid and remote business practices; the zero-trust model is the best option for optimizing security for each individual user and every device.
This model operates under the assumption that there exists a potential security concern with every login and every connection. Rather than all the data being accessible once on a secured network, such as in the perimeter security model, the data is only able to be retrieved if specific conditions are met.
It is otherwise completely secured. Each user, verified by multifactor authentication, should be able to access only what is necessary for the current task, and each data set should be classified by its specific risk, with access limited accordingly.
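The per-request decision described above can be sketched as a simple deny-by-default policy check. This is an illustrative toy, not a real policy engine; the field names (clearance levels, purpose flag) are assumptions invented for the example.

```python
# Minimal sketch of a zero-trust access decision. Every request is
# evaluated independently: identity (MFA), device posture, need-to-know
# for the current task, and the data set's risk classification.
# Field names and clearance levels are hypothetical.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_mfa_verified: bool
    device_compliant: bool
    user_clearance: int        # e.g. 0 = public data ... 3 = sensitive health data
    data_classification: int   # risk level assigned to the requested data set
    purpose_approved: bool     # task-level need-to-know check

def authorize(req: AccessRequest) -> bool:
    """Deny by default; grant only when every condition is met."""
    return (
        req.user_mfa_verified
        and req.device_compliant
        and req.purpose_approved
        and req.user_clearance >= req.data_classification
    )

# A marketing analyst with no need-to-know for sensitive health data is
# denied, even when connecting from inside the corporate network.
marketing = AccessRequest(True, True, user_clearance=1,
                          data_classification=3, purpose_approved=False)
print(authorize(marketing))  # False
```

The key contrast with perimeter security is that nothing in this check asks where the request comes from; being "inside the network" grants no access by itself.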
If this model had been sufficiently employed at Flo Health, the sensitive data would more than likely not have been accessible to their marketing team. This would have greatly reduced the chance of this type of personal information being shared with third-party companies.
The National Institute of Standards and Technology (NIST) has published guidance (Special Publication 800-207) on the core components of a zero-trust architecture and how to establish one.
It specifically addresses remote workers and cloud-based networks. Following this type of security structure would greatly improve health data privacy and limit inadvertent lapses in sensitive data security.
However, a concern would still remain for companies for which profiting from such data collection is an essential part of the business model.
In addition to companies restructuring the way they approach security to assume every interaction has the potential for a breach, the FTC and similar agencies should make consequences a reality.
Although the FTC has the ability to prosecute sensitive data sharing violations, it does not consistently exercise this authority. Some health organizations have faced fines for data security breaches, but for many users these felt more like warnings than significant penalties for such intrusive data sharing.
Many believe this approach will not deter similar apps from continuing to sell user data. As the workforce undergoes a digital transformation, with many users accessing shared networks from across the globe, the FTC should strongly consider prosecuting violations involving personal health information.
Federal legislation is under discussion that would add sought-after oversight of the data economy and commercial surveillance. The American Data Privacy and Protection Act (ADPPA) is now headed to the House of Representatives for debate.
This bill limits data collection to “what is reasonably necessary, proportionate, and limited to provide specific products and services requested by individuals.” It aims to make basic sensitive data privacy a right of all citizens.
It would also require companies to explicitly ask users’ permission before collecting any data. If passed, the ADPPA would be enforced by the FTC, giving the agency additional regulations, applicable to all individuals, to draw on when carrying out investigations.
Sensitive data collection is a significant concern that is not going away anytime soon as both personal and professional lives become increasingly centered around technology and data sharing.
There are steps companies can take to protect their infrastructure from security breach incidents. On a federal level, an increase in investigations and penalties for using consumer data deceptively is expected.
In the meantime, individual users must do their due diligence to protect their own information and fully understand the implications behind privacy policies for any company that possesses their personal health data.