The Federal Trade Commission recently came out with its report (dated December 2012), “Mobile Apps for Kids: Disclosures Still Not Making the Grade.” The report states that despite the FTC’s efforts, including an earlier February 2012 report, “Mobile Apps for Kids: Current Privacy Disclosures are Disappointing,” and various educational outreach programs, children’s app developers still do not provide parents with sufficient information about what an app collects. The apps still collect information such as the device ID, geolocation, or phone number without disclosure. The report also states that a number of apps fail to disclose that they contain interactive features such as advertising, the ability to make in-app purchases, and links to social media. It is a quick and dismal read, available at: http://www.ftc.gov/os/2012/02/120216mobile_apps_kids.pdf
What is interesting about the report is what is missing – the advertiser’s view. How much revenue do the in-app purchases and advertising in children’s apps generate? Is this a case of small companies making bad or ill-informed choices in the rush to get an app into the market, or is there actually a strong industry built on serving targeted ads to kids through apps? If it is the latter, how, as a society, do we distinguish this level of advertising from the barrage of Saturday morning cartoon advertising with which I grew up?
I suspect the answer to the previous question is that this advertising is targeted to a particular child, based on the device ID and his or her app and online usage. The FTC report raises this issue, but nothing in the report shows that it is actually happening. Unless the child actually owns the device, any targeting will be muddled by the parent’s use of the phone as well and, therefore, be pretty limited if not worthless.
One anecdote in the report shows how inept this advertising can be. The report cites the example of one children’s app that claimed to have no advertising, yet served ads, and ads for meeting singles at that. If I were the company that paid for that singles ad, I would be furious to learn it was wasted on the wrong audience and generated ill will by being shown to children. The other question is how disjointed a company must be to claim in its policy that it doesn’t advertise and then blatantly advertise. That seems symptomatic of a company slapping together app components in order to rush an app into the market.
The report highlights that the concern with children’s apps is the concern with all apps – lack of transparency about what is collected and how it is used, and over-collection of data. With children’s apps there is, of course, increased potential for harm, as a child is more likely to trust the app company or not really care about what information is collected. It is also likely that the child is the one downloading the app. How many parents do you know who look for apps to download for their kids? The more probable scenario is that the child has the phone and wants to download the app. This means parents may need training about privacy policies, as I suspect the likely answer to a child’s question about downloading an app turns more on cost than on data collection.
A Juniper Networks study of over 1.7 million apps on the Google Play market from March 2011 to September 2012 found that free apps were far more likely to collect data on the app user than paid apps.
In general, the study found that free apps are 401 percent more likely to track location and 314 percent more likely to access user address books than their paid counterparts. The breakdown is:
- 24.14 percent of free apps have permission to track user location, versus 6.01 percent of paid apps;
- 6.72 percent of free apps have permission to access user address books, versus 2.14 percent of paid apps;
- 2.64 percent of free apps have permission to silently send text messages, versus 1.45 percent of paid apps;
- 6.39 percent of free apps have permission to clandestinely initiate calls in the background, versus 1.88 percent of paid apps; and
- 5.53 percent of free apps have permission to access the device camera, versus 2.11 percent of paid apps.
While much of this app functionality appears to be associated with advertising, the Juniper study found several apps that collected data, such as location data, for no apparent reason. Some of the seemingly unnecessary use of phone functions was explained after contacting the app developer. For example, financial apps needed to make calls in order to reach the user’s financial institution, and one racing game needed to use the camera because the premium version of the game allowed the user to incorporate their own picture into the game. The study did find that racing games and casino games were more likely to collect data and use phone functions than other apps.
The study wisely suggests that it is not enough for an app to inform the potential user of the phone functionality the app will use once downloaded. The potential user also needs to know why the app needs that functionality. An interesting follow-up study would be to see whether people would opt to pay for an app if they were clearly aware of the privacy implications of downloading the free version.
For more information on the Juniper study: http://forums.juniper.net/t5/Security-Mobility-Now/Exposing-Your-Personal-Information-There-s-An-App-for-That/ba-p/166058
Researchers from Indiana University and the Naval Surface Warfare Center have created a demonstration app, PlaceRaider, which uses your phone’s camera to take photos throughout the day, using the accelerometer to know when the phone is upright rather than lying on your desk or, possibly, sitting in your pocket. The photos are sent back to a specified server where specialized algorithms stitch them together into 3D pictures of the rooms you’ve been in, including any valuables, financial documents, and other sensitive information visible in those rooms.
While there have been blog posts and articles on the scariness of this app, it is unlikely to be deployed widely. It would require the bad guys to develop the software to stitch together all the different pictures, as well as to sort through the pictures from all of the different users. There are easier ways to steal. However, it could be handy for targeted deployment against specific individuals as part of industrial or political espionage.
A copy of the research paper may be found at: http://arxiv.org/abs/1209.5982
I recently found out, via a comment dated August 24, 2012, that this webpage contains third-party trackers. Specifically, visitors to this page are tracked by Quantcast, Scorecard Research Beacon, Twitter Button, and WordPress Stats. This was pretty alarming news, particularly for a privacy blog. My first inclination was to try to block everything. I ended up going with my second inclination, which was to explain what these trackers do and point you to tools you can use to block them.
The trackers were exposed by the comment from Jamie Powers, through his use of Collusion, which I have written about before (see the June 10, 2011 post — collusion-a-browser-tool-that-tracks-the-trackers-tracking-you). Jamie stated that in addition to the trackers listed in this post he also found Facebook, DoubleClick, adnxs and atdmt. I added the Ghostery plug-in to my browser. Ghostery shows the trackers on each webpage you visit and provides a means to easily block them. I had Ghostery on my last computer but failed to install it when I set up my new one. Ghostery showed the following companies are tracking you on this site:
Quantcast. Quantcast is an audience measurement tool that lets advertisers reach particular demographics. Quantcast states that it does not collect any personally identifiable information. Quantcast uses web beacons and cookies to track the websites you visit and to develop a profile of you based on when, where, and at what times your browser loads one of its web beacons. Quantcast recently settled a lawsuit for $2.4 million U.S. over its use of Locally Stored Objects or “super cookies” (cookies that respawn after you delete them). For more information on Quantcast and privacy see: http://www.quantcast.com/privacy.
Scorecard Research Beacon. Scorecard Research is a service of Full Circle Studies, Inc., which is part of comScore, Inc. comScore is one of the larger market research companies reporting on Internet behavior and trends; Scorecard Research is its research arm. Scorecard Research uses online services and cookies to collect data such as a timestamp, the URL, and the title of the web page. The data is analyzed and presented in the aggregate, so it is not “this computer went from this page to that page” but more along the lines of “35% of computers went from this page to that page.” Information on Scorecard Research may be found at: http://www.scorecardresearch.com/privacy.aspx
Twitter Button. A Twitter button is simply a button that allows one of my posts to be tweeted. However, Twitter tracks visitors to this webpage who are logged into a Twitter account, even if they don’t tweet anything from this page. Twitter has told the Wall Street Journal that it deletes the tracking data “quickly” and only collects it in anticipation of future services (see: http://online.wsj.com/article/SB10001424052748704281504576329441432995616.html?mod=WSJ_hp_MIDDLENexttoWhatsNewsThird). There is also a way, using Google Analytics, for me to use the Twitter button for tracking. I don’t, but if you are interested in seeing how it is done, see: http://www.socialmediaexaminer.com/how-to-track-tweets-facebook-likes-and-more-with-google-analytics/.
WordPress Stats. WordPress Stats collects browser information, date/time, demographic data, page views, and IP address. This data is aggregated and shared with third parties. While I don’t see IP addresses, WordPress does provide me with daily information about which pages of my blog were visited, which nations visitors come from (a lot of people all over the world seem to care about privacy), and the search terms used to find this website.
It is important to note that even if these different trackers don’t collect personally identifiable information, it may still be possible to identify you, either from the websites you visit or by combining the tracking information with other databases.
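The "combining with other databases" risk can be made concrete with a toy sketch. All records, names, and field choices below are made up for illustration; the point is only that two datasets sharing quasi-identifiers (here, ZIP code and birth date) can be joined to put a name on "anonymous" browsing records.

```python
# Toy illustration: "anonymous" tracking records re-identified by
# joining on quasi-identifiers. All data below is fabricated.

tracking_log = [  # no names; looks anonymous on its face
    {"zip": "46201", "birth_date": "1970-05-01", "sites": ["privacyblog", "bank"]},
    {"zip": "90210", "birth_date": "1985-11-30", "sites": ["news", "sports"]},
]

public_roll = [  # a public record containing the same quasi-identifiers
    {"name": "Alice Example", "zip": "46201", "birth_date": "1970-05-01"},
    {"name": "Bob Example", "zip": "90210", "birth_date": "1985-11-30"},
]

def reidentify(log, roll):
    """Match anonymous records to names via shared quasi-identifiers."""
    matches = []
    for rec in log:
        for person in roll:
            if (rec["zip"], rec["birth_date"]) == (person["zip"], person["birth_date"]):
                matches.append((person["name"], rec["sites"]))
    return matches
```

With real data the join is noisier than this, but research on quasi-identifiers shows that surprisingly few attributes are needed to single out most individuals.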
How to Stop the Tracking. I have already mentioned the Ghostery plug-in, which is available for Internet Explorer, Firefox, and Chrome. Ghostery works well to reveal the trackers and allow you to block them. There are other plug-ins out there, such as NoScript, and these plug-ins have their adherents. You can even use more than one in your browser.
Why Don’t I Add the WordPress Tracker Blocker Plug-in? That is a good question. I looked into it but, for now, have decided not to for several reasons:
- Many sites have trackers, and blocking them here won’t help you when you go to those other sites. Since this site exists to educate about privacy, it seems more fitting to talk about the issue and the tools.
- Trackers help make for better webpages. I use the information I receive from WordPress to try to write about things that interest people. So far the most read post was on Collusion.
- Advertising pays for much of the Internet, and trackers are part of the Internet advertising machine. The problem is that the adage “when you get something for free on the Internet, you are the product” is not well publicized. WordPress provides a place and tools for blogging at no charge to me, and you don’t have to pay to read this.
I am not sure I made the right choice. I will add information about the trackers and Ghostery to the “About Me” page so readers can make their own choice.
I always enjoy catching up on the occasional TED (Technology, Entertainment and Design) talk. I recently saw one in which Jeff Carter presented on the coming use of iris scans as a means of authenticating individuals, eliminating the need for the common user name/password. Jeff Carter is the Chief Strategy Officer for Eyelock, which provides technology for iris scan authentication/authorization. No surprise, then, that he really likes iris scans for authentication and looks forward to the day when they will replace the user name/password.
While the user name/password has its drawbacks, such as requiring a person to remember a series of characters that, by design, are difficult to remember, I am not convinced that iris scans will provide the unhackable authentication that saves us from the problem of identity theft.
This is because an iris scan, or any biometric, is ultimately stored as digital information, and any digitally stored information can be digitally copied and used by someone else. This is a particular problem with biometrics, since biometrics are, by definition, uniquely tied to an individual. You can’t readily get a new iris if someone steals the data on your iris and impersonates you online.
If biometrics became the sole means of identifying you and granting you access, identity theft would be worse than it is now. At least now, if someone steals your user name/password, you can change it, and the theft should affect only one or, at most, a few sites (provided you aren’t foolish enough to use the same user name/password for everything). Relying on biometric data alone would likely increase identity theft, as a single theft would give the thief access to everything you log into, such as all of your bank accounts, credit card information, medical records, and email.
The answer, of course, is not to have all websites use biometrics the same way. For example, a website could combine the biometric information with other, website-specific, information before encrypting or hashing it, so that even if the website were hacked, the stolen identities could only be used on that site. That kind of approach would truly limit identity theft.
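A minimal sketch of that idea, assuming a stable biometric template as input (the function names and site identifiers here are hypothetical): derive a per-site credential by keying a hash with a random salt plus the site’s own identity, so the stored value from one site is useless anywhere else.

```python
import hashlib
import hmac
import os

def enroll(biometric_template, site_id):
    """Derive a site-specific credential from a biometric template.

    A random per-site salt plus the site's own identifier are used as
    keying material, so a breach of this site's database does not yield
    a credential that works on any other site.
    """
    salt = os.urandom(16)                         # unique to this site/user
    key = salt + site_id.encode("utf-8")          # site-specific keying material
    credential = hmac.new(key, biometric_template, hashlib.sha256).digest()
    return salt, credential                       # both are stored server-side

def verify(biometric_template, site_id, salt, stored_credential):
    """Recompute the credential and compare in constant time."""
    key = salt + site_id.encode("utf-8")
    candidate = hmac.new(key, biometric_template, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, stored_credential)
```

One real-world caveat: actual biometric readings are noisy, so production schemes use error-tolerant constructions (often called fuzzy extractors) rather than an exact hash. The sketch only illustrates the site-specific binding the post describes.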
The Jeff Carter TEDx video can be seen at: http://www.youtube.com/watch?v=Fk1LVGX64QE
First the article: http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?_r=1&pagewanted=all
The article provides a great review of how and why habits are formed, as well as how retailers try to create buying habits. Most of our buying habits are, of course, routine. There are moments, however, when life events create an opportunity for a change in our habits, such as marriage, pregnancy, and divorce. Mining “big data” provides the opportunity to identify certain buying changes that are correlated with life changes. For example, the article notes that switching coffee brands is correlated with marriage, switching cereals is correlated with moving into a new house, and switching beer brands is correlated with divorce (the bigger question, of course, is “why?”).
As described in the article, and denied by Target, Target identified certain buying behaviors, such as purchasing unscented lotion, as being correlated with pregnancy. Earlier studies have shown that pregnancy is a great time to try to shift shopping behaviors, and Target wanted to capitalize on pregnancy-induced shopping changes before other retailers could. Using these hidden purchasing correlations, Target targeted (you find a better word) its statistically pregnant shoppers with baby ads. All was fine until an angry father marched into a Target asking why his high school daughter was receiving the coupons. Poor dad; he later learned Target was right.
Having learned its lesson, Target now apparently slips its targeted coupons in among more random ones so the targeting is less obvious. This seems to work just fine, but there is a cost for Target: it knows it is presenting customers with coupons they probably won’t use just to hide the coupons they will.
Every day we spew data about ourselves into the world, particularly through our online activity. This should lead to an ideal — I receive rewards, such as coupons and freebies, for things I like. However, when this happens as if by magic, we become alarmed. When we learn that we are receiving rewards based not on magic but on information captured about us, we become more alarmed — alarmed to the point that many may not want the benefit if it is predicated on an analysis of the data we spew into the world. Why is this the case?