
The Hidden Cost of Convenience: Why Privacy Can’t be an Afterthought

In 2018, fitness tracking app Strava released a global “heatmap” feature showing where users most frequently ran, cycled, or exercised, based on GPS data collected from millions of devices. Intended as a fun and motivational visualization of global fitness activity, the heatmap inadvertently revealed something far more sensitive: the locations of military bases, patrol routes, and soldier movements around the world.

Because the app’s location-sharing feature was on by default, and because many users, particularly military personnel, were unaware of this, highly sensitive data was publicly exposed. Remote military outposts in conflict zones lit up on the map, clearly outlining patterns of activity that could be exploited by hostile actors.

This incident did not stem from a data breach or a hack. It happened because privacy was not embedded into the product’s design and because the default settings favoured openness over data protection. Strava’s heatmap is a cautionary tale: it shows how disregarding privacy can lead to a swift erosion of user trust.1

Trust in the safety and security of a technology is a major factor in its public acceptance. Many people today may be using AI in one way or another, but they are not necessarily confident about the safety and security of their data.2 This challenge can only be addressed by prioritising privacy and incorporating it into the design of products, instead of viewing it as a limitation on functionality. For example, when Apple introduced Apple Intelligence in 2024, it emphasized that many AI features, such as content generation, summarization and intelligent suggestions, would be processed on the user’s device. Unlike cloud-based models such as those used by Google, which transfer and store data remotely, Apple’s approach keeps sensitive information such as emails, messages and photos on the user’s device, minimizing the risk to users’ privacy.3 For tasks that cannot be completed on device, Apple has introduced “Private Cloud Compute”, which ensures that data is not associated with specific users and is not stored permanently but used only during the processing session.4 Apple claims that it does not use user data to train its models, relying instead on synthetic data and differential privacy, a technique that involves injecting statistical noise into user data.5 It also allows independent third parties to verify its adherence to its privacy policies and guidelines.6 Apple could demonstrate even greater respect for privacy by making Apple Intelligence opt-in, as it does for features such as location tracking, rather than enabling it by default; even so, it enjoys a competitive advantage and regulatory goodwill by embedding privacy.7
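To make the idea of “injecting noise” concrete, the short sketch below illustrates the Laplace mechanism, a standard building block of differential privacy. It is only an illustrative example: the function, parameter values and toy query are assumptions made here for explanation and do not describe Apple’s actual implementation.

```python
# Illustrative sketch of the Laplace mechanism used in differential privacy.
# The query, parameter values and function names below are assumptions for
# explanation only, not a description of any vendor's real implementation.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a query result with calibrated Laplace noise added.

    sensitivity: the most the query result can change if one individual's
                 record is added or removed.
    epsilon:     the privacy budget; smaller values add more noise and give
                 stronger privacy guarantees.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: publish how many users enabled a feature without revealing whether
# any single user is included (the sensitivity of a counting query is 1).
true_count = 1204
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Privately released count: {noisy_count:.0f}")
```

The noisy count remains useful in aggregate, but the added randomness means no observer can infer from the published figure whether any particular individual’s data was included.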

Interestingly, privacy by design is not a novel concept. It was introduced by Ann Cavoukian in the 1990s as “the philosophy and methodology of embedding privacy into the design specifications of information technologies, business practices, and networked infrastructures as a core functionality.”8 She articulated seven foundational principles9 underlying the concept, namely:

  • Proactive not reactive; preventative not remedial: requires proactive measures to prevent privacy harms from occurring.
  • Privacy as the default: wherever possible, the individual should be delinked from their personal information. This principle also requires that the collection of personal information be lawful, fair and necessary for a specified purpose.
  • Privacy embedded into design: requires that privacy be integrated into the architecture of the technology in a creative and holistic manner.
  • Full functionality – positive-sum, not zero-sum: there should be no undesirable trade-offs between privacy and legitimate interests; privacy and functionality must be seen as complementary.
  • End-to-end protection: privacy must be consistently protected throughout the lifecycle of personal information.
  • Visibility and transparency: requires that technology and business practices operate according to stated promises and objectives, subject to independent verification.
  • Respect for user privacy: requires that user privacy be prioritised and that user-friendly options be provided for individuals to exercise control over their personal information.

Privacy by design is now a mandate under the EU General Data Protection Regulation, which has set a high standard through Article 25. Other jurisdictions have since begun to incorporate similar principles. For instance, Section 24 of Singapore’s Personal Data Protection Act requires organisations to implement reasonable security measures and data protection policies. The PDPC’s Guide to Accountability10 and Guide to Developing a Data Protection Management Programme11 encourage businesses to integrate privacy into their processes throughout the data lifecycle, which is essentially a soft version of privacy by design. Similarly, although the California Privacy Rights Act and the California Consumer Privacy Act do not contain an explicit privacy-by-design mandate, they do mandate opt-in mechanisms, data minimization, and regular risk assessments. India’s DPDPA likewise does not contain an explicit mandate of privacy by design and by default. However, it does require data fiduciaries to implement “reasonable security safeguards” to prevent personal data breaches under Section 8(5) and to collect only data that is “necessary for a specified purpose” under Section 6. These obligations imply a duty to embed privacy into data processing systems and decisions. A systematic and proactive approach to integrating privacy into the technological architecture would therefore ensure compliance with the DPDPA. Further, MeitY’s Report on AI Governance Guidelines Development emphasises the principles of transparency, accountability, robustness, privacy and security, and in particular digital-by-design governance.12 This would naturally include privacy by design and by default.

As India begins implementing the DPDPA, the Data Protection Board of India will have a major role in shaping how privacy is protected. A powerful step it could take is to encourage the adoption of privacy by design and by default through its codes of practice. This would make privacy part of how systems are built from the start, rather than something addressed only when a complaint arises. Such an approach is especially important for protecting people who may not fully understand the risks of sharing their data, particularly children, and for processing that poses a high risk owing to the sensitivity of the data or the nature of its use. Further, as AI systems become more common in everything from credit scoring to hiring, privacy by design and by default can mitigate the risk of unfair decisions or misuse of data. Its adoption would give people greater confidence that their data is handled responsibly.