Posts Tagged ‘Federal Trade Commission (FTC)’
The Federal Trade Commission has proposed revisions that would bring the Children’s Online Privacy Protection Act in line with 21st-century technology, largely targeting social networks and online advertisers.
By Alice Cheng
Based on comments solicited last year, the Federal Trade Commission (FTC) has posted proposed revisions to the Children’s Online Privacy Protection Act (COPPA). The Act, which has not been updated since its inception in 1998, may be extended to include social networks and online advertisers.
According to the current regulations, COPPA applies only to website operators who know or have reason to know that their users are under the age of 13, requiring those sites to obtain parental consent before any collection of data. In the past decade, an increased ability to harvest consumer information has necessitated revisions. In an FTC staff report released earlier this year, the Commission addressed a growing need for app stores and app developers to provide parents with more information about their data collection practices. With the proposed changes posted today, the FTC plans to update COPPA to respond to modern concerns surrounding social networking sites, advertising networks, and applications. Under the proposed changes, such third parties may be held responsible for unlawful data collection practices when they know or have reason to know that they are connecting to children’s websites. Mixed-audience websites may have to screen all visitors in order for COPPA regulations to apply to users under 13 years of age. Additionally, restrictions on advertising based on children’s online activity may be tightened.
The FTC will be accepting public comments on the proposed rules via the FTC website. Comments will be accepted until September 10, 2012.
Several House lawmakers have sent letters to nine major data broker firms, seeking transparency on data practices.
By Alice Cheng
Last week, eight House members, including Congressional Bi-Partisan Privacy Caucus chairmen Ed Markey (D-Mass.) and Joe Barton (R-Tex.), sent letters to nine major data broker firms, asking for information on how they collect, assemble, maintain, and sell consumer information to third parties.
The letter references a recent New York Times article profiling data broker Acxiom, which may have spurred the lawmakers’ decision to target the firms. Data brokers are large firms that aggregate information about hundreds of millions of consumers and sell it to third parties for marketing, advertising, and other purposes. Oftentimes, profiles of consumers are created to reflect spending habits, political affiliation, and other behavioral information. As the article explains, the issue with these activities is that they are largely unregulated, largely unknown to the general public, and often difficult to opt out of.
Privacy advocates, lawmakers, and the Federal Trade Commission have continued to push for increased transparency into the activities of data brokers. A statement explains that, in sending the letter to the nine firms, the lawmakers in the Bi-Partisan Privacy Caucus seek to obtain information on the brokers relating to “privacy, transparency and consumer notification, including as they relate to children and teens.”
Survey finds that only 61.3% of apps have privacy policies, reflecting perceived need for increased app privacy regulations.
By Alice Cheng
The FPF credits the consumer privacy efforts of various groups, including the Federal Trade Commission and the California Attorney General. The FTC has made continuous efforts to help companies develop best consumer privacy practices, and has been involved in battling privacy violations. In February, California Attorney General Kamala Harris persuaded six major companies with mobile platforms (including Apple, Microsoft, and Google) to ensure that app developers include privacy policies that comply with the California Online Privacy Protection Act. More recently, Harris also announced the formation of the Privacy Enforcement and Protection Unit to oversee privacy issues and to ensure that companies are in compliance with the state’s privacy laws.
The Federal Trade Commission fined an online data broker who allegedly sold consumer reports containing internet and social media data in the context of employment screenings without adhering to the Fair Credit Reporting Act’s consumer protections.
By Alice Cheng
Data broker Spokeo recently agreed to pay $800,000 to settle Federal Trade Commission (FTC) charges in what is the FTC’s first Fair Credit Reporting Act (FCRA) case involving the “sale of internet and social media data in the employment screening context.”
Spokeo, a social network aggregator website, has long been notorious for the comprehensive profiles (including name, address, email address, phone number, hobbies, ethnicity, religion, etc.) it compiles and sells to third parties. Personal information of individuals is collected both online and offline, and profiles have been used for employment screening purposes—a practice that the FTC has alleged is in violation of the FCRA.
The FTC recently took legal action against the company after receiving an initial complaint about its practices from the Center for Democracy & Technology in 2010. The alleged FCRA violations include failing to make sure that the information was sold for legally permissible uses only, failing to ensure that the information was accurate, and failing to notify users of the consumer reports about their obligations under the FCRA.
The FCRA is a federal law regulating the collection, dissemination, and use of consumer information (including consumer credit information) to promote the accuracy, fairness, and privacy of such information. In order to avoid violating FCRA regulations, Spokeo says it will no longer build “consumer reports” and will no longer sell its information for employment screening purposes.
Aside from the potential FCRA violations, widespread collection of data by aggregators like Spokeo remains an ongoing privacy issue. The collection of personally identifiable information, such as Social Security numbers or driver’s license numbers, carries obvious concerns, but even the collection of “non-sensitive” information can be problematic. This aggregated data is commonly used by advertisers to target prospective customers or, as in Spokeo’s case, sold to any willing buyer. While it may not always be easy to pinpoint any concrete harm to consumers, many are nevertheless uneasy about such compilations.
While the FTC has been increasingly vigilant regarding big data concerns, little progress is being made in developing data protection regulations. Continual changes in technology, such as the move to cloud computing services, may also invite further complications to developing appropriate regulations. Consumers need to be aware of what information is being collected and how it is used. Businesses need to be aware of what laws, rules and regulations govern their collection and use of information so they can assure successful compliance.
Who Owns Your Data and What Can They Do With It? Understanding Data Privacy and Information Security in the Cloud
Tuesday, May 29th, 2012
“Cloud” Technology Offers Flexibility, Reduced Costs, Ease of Access to Information, But Presents Security, Privacy and Regulatory Concerns
With the recent introduction of Google Drive, cloud computing services are garnering increased attention from entities looking to more efficiently store data. Specifically, using the “cloud” is attractive due to its reduced cost, ease of use, mobility and flexibility, each of which can offer tremendous competitive benefits to businesses. Cloud computing refers to the practice of storing data on remote servers, as opposed to on local computers, and is used for everything from personal webmail to hosted solutions where all of a company’s files and other resources are stored remotely. As convenient as cloud computing is, it is important to remember that these benefits may come with significant legal risk, given the privacy and data protection issues inherent in the use of cloud computing. Accordingly, it is important to check your cloud computing contracts carefully to ensure that your legal exposure is minimized in the event of a data breach or other security incident.
Cloud computing allows companies convenient, remote access to their networks, servers and other technology resources, regardless of location, thereby creating “virtual offices” which give employees remote access to their files and data identical in scope to the access they have in the office. The cloud offers companies flexibility and scalability, enabling them to pool and allocate information technology resources as needed, using the minimum amount of physical IT resources necessary to service demand. These hosted solutions enable users to easily add or remove storage or processing capacity as needed to accommodate fluctuating business needs. By utilizing only the resources necessary at any given point, cloud computing can provide significant cost savings, which makes the model especially attractive to small and medium-sized businesses. However, the rush to embrace cloud computing for its various efficiencies often comes at the expense of data privacy and security.
The laws that govern cloud computing are (perhaps somewhat counterintuitively) based on the physical location of the cloud provider’s servers, rather than the location of the company whose information is being stored. American state and federal laws concerning data privacy and security vary considerably, while servers in Europe are subject to more comprehensive (and often more stringent) privacy laws. However, this may change, as the Federal Trade Commission (FTC) has been investigating the privacy and security implications of cloud computing as well.
In addition to location-based considerations, companies expose themselves to potentially significant liability depending on the types of information stored in the cloud. Federal, state and international laws all govern the storage, use and protection of certain types of personally identifiable information and protected health information. For example, the Massachusetts Data Security Regulations require all entities that own or license personal information of Massachusetts residents to ensure appropriate physical, administrative and technical safeguards for their personal information (regardless of where the companies are physically located), with fines of up to $5,000 per incident of non-compliance. That means that the companies are directly responsible for the actions of their cloud computing service provider. Aaron Messing, an information privacy and technology attorney at OlenderFeldman LLP, notes that some information is inappropriate for storage in the cloud without proper precautions. “We strongly recommend against storing any type of personally identifiable information, such as birth dates or social security numbers in the cloud. Similarly, sensitive information such as financial records, medical records and confidential legal files should not be stored in the cloud where possible,” he says, “unless it is encrypted or otherwise protected.” In fact, even a data breach related to non-sensitive information can have serious adverse effects on a company’s bottom line and, perhaps more distressing, its public perception.
Additionally, the information your company stores in the cloud will be affected by the rules set forth in the privacy policies and terms of service of your cloud provider. Although these terms may seem like legal boilerplate, they may very well form a binding contract which you are presumed to have read and consented to. Accordingly, it is extremely important to have a grasp of what is permitted and required by your cloud provider’s privacy policies and terms of service. For example, the privacy policies and terms of service will dictate whether your cloud service provider is a data processing agent, which will only process data on your behalf, or a data controller, which has the right to use the data for its own purposes as well. Notwithstanding the terms of your agreement, if the service is being provided for free, you can safely presume that the cloud provider is a data controller who will analyze and process the data for its own benefit, such as to serve you ads.
Regardless, when sharing data with cloud service providers (or any other third-party service providers), it is important to obligate those third parties to process data in accordance with applicable law, as well as your company’s specific instructions — especially when the information is personally identifiable or sensitive in nature. This is particularly important because, in addition to the loss of goodwill, most data privacy and security laws hold companies, rather than their service providers, responsible for compliance. That means your company needs to ensure the data’s security even when it is in the cloud provider’s control. It is important for a company to agree with the cloud provider on the appropriate level of security for the data being hosted. Christian Jensen, a litigation attorney at OlenderFeldman LLP, recommends contractually binding third parties to comply with applicable data protection laws, especially where the law places the ultimate liability on you. “Determine what security measures your vendor employs to protect data,” suggests Jensen. “Ensure that access to data is properly restricted to the appropriate users.” Jensen notes that since data protection laws generally do not specify the levels of commercial liability, it is important to ensure that your contract with your service providers allocates risk via indemnification clauses, limitations of liability and warranties. Businesses should also reserve the right to audit the cloud service provider’s data security and information privacy compliance measures in order to verify that the provider is adhering to its stated privacy policies and terms of service. Such audits can be carried out by an independent third-party auditor, where necessary.
Today, the Federal Trade Commission (FTC) issued a final report setting forth best practices for businesses to protect the privacy of American consumers and give them greater control over the collection and use of their personal data, entitled “Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers.” The FTC also issued a brief new video explaining the FTC’s positions. Here are the key take-aways from the final report:
- Privacy by Design. Companies should incorporate privacy protections in developing their products and in their everyday business practices. These include reasonable security for consumer data, limited collection and retention of such data, and reasonable procedures to ensure that such data is accurate.
- Simplified Choice. Companies should give consumers the option to decide what information is shared about them, and with whom. Companies should also give consumers that choice at a time and in a context that matters to people, although choice need not be provided for certain “commonly accepted practices” that the consumer would expect.
- Do Not Track. Companies should include a Do-Not-Track mechanism that would provide a simple, easy way for consumers to control the tracking of their online activities.
- Increased Transparency. Companies should disclose details about their collection and use of consumers’ information, and provide consumers access to the data collected about them.
- Small Businesses Exempt. The above restrictions do not apply to companies who collect only non-sensitive data from fewer than 5,000 consumers a year, provided they don’t share the data with third parties.
Interestingly, the FTC’s focus on consumer unfairness, rather than consumer deception, was something that FTC Commissioner Julie Brill hinted to me when we discussed overreaching privacy policies and terms of service at Fordham University’s Big Data, Big Issues symposium earlier this month.
If businesses want to minimize the chances of finding themselves the subject of an FTC investigation, they should be prepared to follow these best practices. If you have any questions about what the FTC’s guidelines mean for your business, please feel free to contact us.
By Aaron Messing
I will be speaking at SES New York 2012 conference about emerging legal issues in search engine optimization and online behavioral advertising. The panel will discuss Legal Considerations for Search & Social in Regulated Industries:
Search in Regulated Industries
Legal Considerations for Search & Social in Regulated Industries
Programmed by: Chris Boggs
Since FDA letters to pharmaceutical companies began arriving in 2009, and with constantly increasing scrutiny of online marketing, many regulated industries have been forced to look for ways to modify their legal terms for marketing and partnering with agencies and other third-party vendors. This session will address the following:
- Legal rules for regulated industries such as Healthcare/Pharmaceutical, Financial Services, and B2B, B2G
- Interpretations and discussion around how Internet Marketing laws are incorporated into campaign planning and execution
- Can a pharmaceutical company comfortably solicit inbound links in support of SEO?
- Should Financial Services companies be limited from using terms such as “best rates”?
Looks like it will be a great panel. I will post my slideshow after the presentation.
(Updated on 3.22.12 to add presentation below)
I had the pleasure of attending Fordham Law School’s Center on Law & Information Policy (CLIP)’s Big Data, Big Issues Symposium today, which had a fascinating lineup of many of the best thinkers in privacy. The Federal Trade Commission (FTC)’s Julie Brill delivered a very interesting keynote address about the benefits and dangers of big data, as well as evolving privacy concerns. The address is well worth a read.
I had a chance to chat with Commissioner Brill after her speech, and asked her thoughts about privacy policies and terms of service that allow for unrestricted and unlimited use of data, such as the infamous Skipity policies. Commissioner Brill stated that, given that most users don’t read privacy policies and terms of service, the FTC is very concerned by these types of one-sided policies. She mentioned that the aggregation and use of data outside of the context of collection is something that the FTC hopes to issue guidance on in the future, and may well be unfair and deceptive regardless of a consumer’s consent.
My takeaway from the chat is that consumer consent will not insulate a website from FTC scrutiny, and that the reasonable expectations of a consumer may dictate the FTC’s considerations of whether a policy is unfair or deceptive, especially given that so little attention is paid to these policies by consumers. However, at the same time, it is important that policies reflect the company’s actual practices.
To understand the genesis of “Do Not Track,” it is important to understand what online tracking is and how it works. If you visit any website supported by advertising (as well as many that are not), a number of tracking objects may be placed on your device. These online tracking technologies take many forms, including HTTP cookies, web beacons (clear GIFs), local shared objects or Flash cookies, HTML5 cookies, browser history sniffers and browser fingerprinting. What they all have in common is that they observe web users’ interests (content consumed, ads clicked, search keywords and conversions), track online movements, and build online behavioral profiles that are used to determine which ads are selected when a particular webpage is accessed. Collectively, these practices are known as behavioral targeting or behavioral advertising. Tracking technologies are also used for purposes other than behavioral targeting, including site analytics, advertising metrics and reporting, and capping the frequency with which individual ads are displayed to users.
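The cookie mechanics behind this kind of tracking can be sketched in a few lines of Python using only the standard library. This is purely illustrative: the `uid` cookie name and the in-memory profile store are hypothetical stand-ins, not the practice of any particular ad network.

```python
# A minimal sketch of cookie-based tracking. On the first request the
# server mints a random ID and sets it as a cookie; on later requests
# the browser replays the cookie, so page views accumulate under one
# profile -- the raw material of behavioral targeting.
import uuid
from http.cookies import SimpleCookie

profiles = {}  # uid -> list of pages viewed

def handle_request(page, cookie_header=""):
    """Record the page view and return (uid, Set-Cookie header)."""
    cookie = SimpleCookie(cookie_header)
    if "uid" in cookie:
        uid = cookie["uid"].value      # returning visitor
    else:
        uid = uuid.uuid4().hex         # first visit: assign an ID
    profiles.setdefault(uid, []).append(page)
    out = SimpleCookie()
    out["uid"] = uid
    return uid, out.output(header="Set-Cookie:")

# First visit: no cookie yet, so an ID is assigned.
uid, set_cookie = handle_request("/sports/article")
# A later visit from the same browser replays the cookie.
handle_request("/cars/review", f"uid={uid}")
print(profiles[uid])  # ['/sports/article', '/cars/review']
```

Deleting the cookie breaks the link, which is why the more persistent mechanisms listed above (Flash cookies, fingerprinting) drew particular scrutiny.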
The focus on behavioral advertising by advertisers and ecommerce merchants stems from its effectiveness. Studies have found that behavioral advertising increases the click-through rate by as much as 670% when compared with non-targeted advertising. Accordingly, behavioral advertising can bring in an average of 2.68 times more revenue than non-targeted advertising.
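To put those ratios in concrete terms, here is a quick back-of-the-envelope calculation. The baseline figures (a 0.1% click-through rate and a $1.98 CPM for untargeted ads) are assumptions chosen for illustration; only the 670% and 2.68x multipliers come from the studies cited above.

```python
# Illustrative arithmetic only: the 670% uplift and 2.68x revenue
# multiplier come from the text; the baselines are assumed.
baseline_ctr = 0.001                        # assume 0.1% CTR, untargeted
behavioral_ctr = baseline_ctr * (1 + 6.70)  # "as much as 670%" higher
print(f"{behavioral_ctr:.2%}")              # 0.77%

baseline_cpm = 1.98                         # assumed $ per 1,000 impressions
behavioral_cpm = baseline_cpm * 2.68        # 2.68x the revenue
print(f"${behavioral_cpm:.2f}")             # $5.31
```

A sub-1% click-through rate even after the uplift is a reminder that the gains are relative, not absolute, yet at web scale they translate into substantial revenue.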
If behavioral advertising provides benefits such as increased relevance and usefulness to both advertisers and consumers, how has it become so controversial? Traditionally, advertisers have avoided collecting personally identifiable information (PII), preferring anonymous tracking data. However, new analytic tools and algorithms make it possible to combine “anonymous” information to create detailed profiles that can be associated with a particular computer or person. Formerly anonymous information can be re-identified, and companies are taking advantage of this in order to deliver increasingly targeted ads. Some of those practices have led to renewed privacy concerns. For example, Target was recently able to identify that a teenager was pregnant – before her father had any idea. It seems that Target has identified certain patterns in expectant mothers, and assigns shoppers a “pregnancy prediction score.” Apparently, the father was livid when his high-school-age daughter was repeatedly targeted with various maternity items, only to later find out that, well, Target knew more about his daughter than he did (at least in that regard). Needless to say, some PII is more sensitive than others, but it is almost always alarming when you don’t know what others know about you.
Ultimately, most users find it a little creepy when they learn that Facebook tracks their web browsing activity through its “Like” button, or that detailed profiles of their browsing history exist that could be associated with them. According to a recent Gallup poll, 61% of individuals polled felt the privacy intrusion presented by tracking was not worth the free access to content, and 67% said that advertisers should not be able to match ads to specific interests based upon websites visited.
The wild west of internet tracking may soon be coming to a close. The FTC has issued its recommendations for Do Not Track, recommending that it be instituted as a browser-based mechanism through which consumers could make a persistent choice to signal whether or not they want to be tracked or receive targeted advertising. However, you shouldn’t wait for an FTC compliance notice to start rethinking your privacy practices.
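In practice, the browser-based signal took shape as a simple HTTP request header, `DNT: 1`. A server honoring it might check the header along these lines; this is a minimal sketch, and the helper name and fallback policy are my own illustration, not an FTC-prescribed implementation.

```python
# Minimal sketch of checking the Do Not Track request header.
# "DNT: 1" signals an opt-out, "DNT: 0" signals consent, and an
# absent header means no preference was expressed. The default
# policy below (track unless opted out) is illustrative only.
def tracking_allowed(headers):
    """Return False when the browser sends DNT: 1."""
    # HTTP header names are case-insensitive, so normalize first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") != "1"

print(tracking_allowed({"User-Agent": "ExampleBrowser/1.0"}))  # True: no signal
print(tracking_allowed({"DNT": "1"}))                          # False: opted out
print(tracking_allowed({"DNT": "0"}))                          # True: consented
```

The hard part was never the header check; it is deciding, as a matter of policy, what "not tracking" obligates the server to stop doing.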
It goes without saying that companies are required to follow existing privacy laws. However, it is important not only to speak with a privacy lawyer to ensure compliance with existing privacy laws and regulations (the FTC’s compliance division also monitors whether companies comply with their posted privacy policies and terms of service), but also to ensure that your tracking and analytics are done in a non-creepy, non-intrusive manner that is clearly communicated to your customers, enables them to opt in, and gives them an opportunity to opt out at their discretion. Your respect for your consumers’ privacy concerns will reap long-term benefits beyond anything that surreptitious tracking could ever accomplish.
We often get questions from both clients and journalists (e.g., here, and here) regarding liability for posting content on the internet, most of it centering around the same basic premise: “Why can Company X post this content on their website? How is that legal? Isn’t that an invasion of privacy?”
In most cases, the answer can be found in Section 230 of the Communications Decency Act of 1996, 47 U.S.C. § 230 (“CDA”). The Act provides immunity for internet service providers (read: websites, blogs, listservs, forums, etc.) who publish information provided by others, while the Digital Millennium Copyright Act of 1998 (“DMCA”) separately requires them to take down content that infringes the copyrights of others in order to preserve its safe harbor. In order to understand the CDA and DMCA, it is helpful to understand how each came about.
The United States has historically favored free speech, with certain limitations. Under the law, a writer or publisher of harmful information is treated differently than a distributor of that information. The theory behind this distinction is that the speaker and publisher have the knowledge of and editorial control over the content, whereas a distributor might not be aware of the content, much less whether it is harmful. Thus, if a writer publishes defamatory content in a book, both the writer and the publisher can be held liable, whereas a library or bookstore that distributed the book cannot.
Initially, courts found a distinction in liability based on whether the website was moderated. An unmoderated/unmonitored website was considered a distributor of information, rather than a publisher, because it did not review the contents of its message boards. Conversely, courts found a moderated/monitored website to be a publisher, concluding that the exercise of editorial control over content made it more like a publisher than a distributor – and thus the website was liable for anything that appeared on the site. Unsurprisingly, this created strong disincentives to monitoring or moderating websites, as doing so increased potential liability.
Given the sheer amount of information communicated online, the potential for liability based on third-party content (i.e. user comments on a blog, website or web bulletin board) threatened the viability of service providers and free speech over the internet.
Congress specifically wanted to remove these disincentives to self-moderation by websites and responded by passing the CDA. The CDA immunizes, with limited exceptions, providers and users of “interactive computer services” from publisher’s liability, so long as the information is provided by a third party (interactive computer service is defined broadly, and covers blogs). This immunity does not cover intellectual property claims or criminal liability, and of course the original creator of the content is not immune. That means a blogger or commentator is responsible for his or her own comments, though not for the submitted content of others (even if it violates a third party’s privacy, is defamatory, etc.). Generally, the CDA will cover a website that hosts third-party content, and exercising editorial functions, such as deciding whether to publish, remove or edit material, does not affect that immunity unless those actions materially alter the content (e.g., changing “Aaron is not a scumbag” to “Aaron is a scumbag” would be a material alteration, whereas cropping a photo or fixing typos would not).
Accordingly, websites that post only user-submitted content (even if the website encourages or pays third parties to create or submit content) are protected under the CDA, and immune from liability, with two major exceptions: the CDA does not immunize against the posting of criminally illegal content (such as underage pornography), and it does not immunize against the posting of another’s intellectual property without permission. Tasked with balancing the need to protect intellectual property rights online against the various challenges faced by websites that led to the CDA, Congress implemented the DMCA. The DMCA creates a safe harbor against copyright liability for websites, so long as they block access to allegedly infringing material upon receipt of a notification from a copyright holder claiming infringement.
Ultimately, protecting yourself from liability under the CDA and DMCA or protecting your intellectual property rights online can be tricky. If you have any questions, feel free to contact us.
“Putting Privacy First” was originally published in the August 2011 edition of TechNews.
Many businesses view legal compliance as a necessary evil and an obstacle to profits. Thus, compliance is often made a mere formality. Dealing with the complex privacy and data protection rules and regulations is often viewed no differently – be it industry-specific rules such as HIPAA (healthcare), age-specific rules such as COPPA (online marketing to minors), agency-specific rules (i.e., SEC or FTC rules), the rules and regulations of each individual state, or even the various foreign laws such as the Data Protection Act (which applies to businesses that conduct any business with many European nations). However counterintuitive it may be for some, forward-thinking businesses do not view privacy and data protection compliance as a necessary drag on revenue; instead, they use it as a marketing tool to distinguish themselves from the competition and grab an increased market share.
As privacy and data breach issues continue to make front page news on a near-daily basis, and with the U.S. Congress working on sweeping new privacy laws, such compliance concerns are increasing in magnitude and importance. The reality is that whether you are aware or not, the various privacy and data protection laws impact and govern the operations of almost all businesses. For example, if you can answer “Yes” to any of these questions, there are privacy and data protection laws that govern your operations: Do you accept credit cards for payment? Do you gather any personal information about your customers, patients, employees, members or vendors? Do you electronically store any data on your computers or servers? Do you sell or market on the Internet? Do you conduct any business with, or market your business to, any person or entity located in another country? Are you in the financial industry? Do you seek to conduct any credit checks on potential employees or customers? The above only addresses a tiny fraction of the activities which subject you to regulation.
So what can and should a business do to not only survive, but actually thrive in this ever-changing regulatory environment? The answer is quite simple – be compliant and market the advantages of your privacy policies.
As acknowledged by the Washington Post on July 18 in “Tech IPO’s Grapple With Privacy,” Google did not have to deal with online privacy in 2004, as such a concept did not exist. Times have certainly changed. On the same day as the Washington Post article, the New York Times reported in an article entitled “Privacy Isn’t Dead. Just Ask Google+” that “Rather than focus on new snazzy features — although it does offer several — Google has chosen to learn from its own mistakes, and Facebook’s. Google decided to make privacy the No. 1 feature of its new service.” Google+ represents a significant attempt by Google to break Facebook’s near stranglehold on social media. Given Google’s past success, it is no surprise that Google has attacked privacy concerns head-on, and turned consumers’ concern for privacy into a marketing bonanza. Such a strategy has been used successfully in the automobile industry for years by companies such as Volvo, Subaru and Mercedes, each of which turned consumer concern about automobile safety into a marketing opportunity to distinguish themselves from the competition by promoting their superior safety features.
The obvious next question is how a business uses consumers’ privacy concerns as a marketing tool. The answer is to acknowledge your customers’ concerns, explain how and why your business cares about the customer more than your competitors do, and show that you will keep them safe. To accomplish this goal, you must first determine which regulatory scheme(s) govern the operation of your business. Second, you must determine the best method for compliance with the applicable law, and whether it makes business sense to implement privacy and data security policies which go beyond the minimum required by law. Third, you should examine how, if at all, your competitors address and promote their privacy obligations. Fourth, you must develop a strategic plan to promote to your customers the superiority of your privacy and data security policies. Importantly, you must not only inform your customers of what your privacy and data security policies are, but how such policies help and protect your customers. For example, Mercedes realized that people were scared of getting injured in car crashes, so their advertisements often explained how Mercedes technology would help avoid accidents (i.e., anti-lock brakes) and how it would protect you if you did crash (i.e., airbags and crumple zones). The same applies to privacy and data protection concerns. In the end, by carefully planning out and implementing each of the above four steps, you will avoid regulatory problems while simultaneously gaining a leg up on the competition.
Do-Not-Track and Online Behavioral Advertising
If you’ve been listening, you are aware of the Federal Trade Commission’s December 2010 Preliminary Staff Report: Protecting Consumer Privacy in an Era of Rapid Change. (Update: The final FTC Privacy Report has been released.) You also know the Commission has challenged providers to create “Do-Not-Track” technology allowing users to opt out of online behavioral advertising. Reportedly, such tools are already in the works. This sounds great, especially to a hermit curmudgeon like me (I can’t delete Flash cookies fast enough). But what are some of the implications?
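For the technically curious: the “Do-Not-Track” idea the Commission describes has been prototyped by browser makers as a simple HTTP request header. Here is a minimal sketch, in Python, of how a site might honor such a preference. The header name `DNT` follows the browser-vendor proposal, but the handler and sample requests are purely illustrative assumptions, not any site’s actual code:

```python
def should_track(headers):
    """Return False if the visitor has opted out via the proposed
    Do-Not-Track header (DNT: 1); default to tracking otherwise."""
    return headers.get("DNT") != "1"

# Hypothetical request headers, represented as plain dicts:
opted_out = {"DNT": "1", "User-Agent": "ExampleBrowser/1.0"}
no_preference = {"User-Agent": "ExampleBrowser/1.0"}

print(should_track(opted_out))      # False: honor the opt-out
print(should_track(no_preference))  # True: no preference expressed
```

Note the design question buried in the last line: in this sketch, the absence of a header means tracking proceeds, which is exactly the opt-out (rather than opt-in) default the Commission’s report wrestles with.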
There’s a funny and intriguing article by Jack Shafer on Slate.com in which he ponders who is in the best position to create a web browser that provides robust security for the user. While Mr. Shafer points out that he is not against advertising, he notes that it is not in developers’ best interest to provide iron-clad browsers that block web-tracking technology, because of their financial ties to advertising revenue. He also aptly notes that, while he favors the legitimate uses of cookies, “too many Web entrepreneurs observe no limits when they decide to snoop.”
Mr. Shafer postulates there may be a market for such a browser, but includes a quote (sure to become a classic in my book) from his colleague Farhad Manjoo: “I doubt there’s a market for such a browser. People don’t care about privacy. They just say they do. If they did, they wouldn’t use Facebook.”
So, which is it? Are users really ready to give up free content in exchange for privacy? According to a recent Gallup poll, 61% of those polled felt that the privacy intrusion posed by tracking was not worth the free access to content, and 67% said that advertisers should not be able to match ads to specific interests based on the websites a user has visited.
What about the other 33-39%? Do they really not care, or are they simply unwilling to give up the Web they know and love?
How about exploring another option? Suppose I go to Harry’s Widget Shoppe and tell Harry that I am extremely interested in buying maroon widgets (we all know they’re the best). Suppose I also tell Harry to contact me immediately if he comes across any maroon widgets (not blue, yellow, or green – just maroon). Why should I have to receive 264 e-mails and see 400 ads over the course of 48 hours from Mildred telling me how great her blue widgets are? I don’t want blue widgets! I’ve had plenty of them, and they’re nothing but trouble. By the same token, I’m not so hip on seeing 918 ads about teeth whitening either (note to self: make an appointment with the dentist).
Assuming Mildred paid to obtain my “widget” profile from Harry or one of his network servers, what did she really get for her money? Not much. She all but guaranteed that I will never buy a widget from her. (Well, maybe, if it’s an especially rare maroon widget…you know…like the ones with feathers…and she buys me dinner.) I also might not be talking to Harry anytime soon, either. But, I digress…
Harry has valuable information about me, information that may well be worth far more to an advertiser than the mere fact that I visited Harry’s Widget Shoppe.com. What if Harry asked me whether it was okay to provide my information to others who had maroon widgets? What if Harry also told me that those with whom he shared my information were contractually obligated not to pass it on to anyone else without my permission? Ye Olde Only Maroon Widget Shoppe.com might be willing to pay Harry dearly for that information, I might get my pick of lovely maroon widgets, I would stop seeing constant ads from widget sellers in whom I have no interest, and my in-box would be far more manageable. Oh, and by the way, I would not feel as if I had totally lost control over information about me.
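The arrangement Harry and I just negotiated boils down to a simple data structure: a record of my stated interest, an explicit opt-in flag, and a no-resharing condition. A minimal sketch in Python, with the caveat that the names `ConsentRecord` and `eligible_offers` are my own inventions for illustration, not any real system’s API:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """A hypothetical profile entry Harry might keep: what I want,
    whether it may be shared, and whether recipients may pass it on."""
    interest: str                  # e.g. "maroon widgets"
    share_allowed: bool = False    # explicit opt-in required
    resharing_allowed: bool = False

def eligible_offers(record, offers):
    """Return only the offers matching the stated interest;
    Mildred's blue widgets never make the cut."""
    if not record.share_allowed:
        return []  # no consent, no sharing, no ads
    return [o for o in offers if record.interest in o.lower()]

me = ConsentRecord("maroon widgets", share_allowed=True)
offers = ["Blue widgets on sale!", "Rare maroon widgets in stock"]
print(eligible_offers(me, offers))  # ['Rare maroon widgets in stock']
```

The point of the sketch is the default: `share_allowed` starts as `False`, so nothing flows to advertisers until I affirmatively say so, which is the reverse of how behavioral tracking works today.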
At its heart, control is a form of choice. While, realistically, we have little genuine choice left in this world, there are some things we would still like to control. I figure a good proportion of that 33-39% might say the same. I might be willing to share some information, and even let you pass it on, if I knew you were not surreptitiously taking it from me and were abiding by my wishes.
So I suppose the upshot is this: it is time for businesses to start asking me for my information, and asking what controls I would like placed on it. Through that process alone, the real value of the information is revealed, and I don’t feel swindled.
Just some thoughts, but I could be wrong. Let’s take another poll.