New Challenges With Body Camera Footage
Chris Cruz gave a very insightful lecture on a topic many students, including myself, may be a bit underinformed about: technology use in government. Before this lecture, I assumed the technology used within the government would be extremely outdated and basic; however, Chris gave evidence that points to the opposite. One example is the centralized cloud infrastructure used across all departments of the California government. Chris mentioned that the centralized cloud is important for accessibility as well as security, which are definitely two advantages of utilizing the cloud. Another important topic was making high-value data sets available so that technology can be used to serve Californians. A question was asked in lecture about camera usage in California, and Chris mentioned there is a push to put a body camera on every police and California Highway Patrol officer; however, this poses a great technological challenge given all of the new data it introduces. The camera footage should be included among the high-value data sets, so not only does this footage need to be stored efficiently, it needs to be readily available as well. In a state as big as California, an incredible amount of data will definitely be gathered per day from body cameras.
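To get a rough sense of that scale, here is a back-of-envelope sketch in Python. The officer count, hours recorded per day, and video bitrate are assumed figures chosen purely for illustration, not official numbers.

```python
# Back-of-envelope estimate of daily body camera data volume statewide.
# All inputs are assumed, illustrative figures -- not official numbers.

officers_recording = 75_000   # assumed officers recording per day statewide
hours_per_shift = 8           # assumed hours of footage per officer per day
bitrate_mbps = 6              # assumed video bitrate in megabits/s (roughly 1080p)

seconds_recorded = officers_recording * hours_per_shift * 3600
total_bits = seconds_recorded * bitrate_mbps * 1_000_000
total_terabytes = total_bits / 8 / 1e12

print(f"Estimated footage per day: {total_terabytes:,.0f} TB")
# With these assumptions the state would ingest roughly 1,600 TB
# (petabyte-scale) of new video every single day.
```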
Some cities, including some within California, have already begun utilizing body cameras regularly in their police forces. An interesting example I found described the Buffalo police force, which uses body cameras outside the Bills stadium since there is a higher chance of crime around the stadium on game days [https://gcn.com/articles/2017/05/11/body-camera-storage.aspx]. As is well known in current data centers, data input and output expectations are not static; demands can vary per hour, or even per minute, depending on the use case for the data. For example, the usage-versus-time chart for one of Google's search data centers would peak during the day but remain relatively low at night due to the nature of the workload and user inputs. After a major event, however, usage can spike heavily and even exceed what the data center is capable of. On this note, the article I quoted mentions how fall Sundays can lead to an incredible spike in data due to the Buffalo Bills football games. The Buffalo police force not only needed to store body camera footage, but also "needed to store both data from mounted surveillance cameras and operational data from town employees," and to do so they turned to software-defined storage (SDS) (gcn.com). SDS allows the police force to use industry-standard hardware by decoupling the physical storage hardware from the storage tasks and performing those tasks in software. The article also mentions that SDS and similar solutions are continually being developed by open source communities, whose innovation and collaboration give users access to the most cutting-edge solutions.
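To make the idea of decoupling storage tasks from hardware a bit more concrete, here is a minimal sketch of what writing footage into an S3-compatible object store could look like; open source SDS systems such as Ceph expose this kind of interface through their object gateways. The endpoint, bucket name, credentials, and key scheme below are placeholders, not the deployment described in the article.

```python
# Minimal sketch: uploading body camera footage to an S3-compatible
# object store, the kind of interface many software-defined storage
# systems (e.g. Ceph's object gateway) expose. Endpoint, bucket, and
# credentials are placeholders, not a real deployment.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://sds-gateway.example.local",  # assumed SDS gateway endpoint
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

# Each clip is keyed by officer ID and timestamp so it can be located quickly later.
s3.upload_file(
    Filename="clip_2017-05-11T14-32.mp4",
    Bucket="bodycam-footage",
    Key="officer-1234/2017/05/11/clip-14-32.mp4",
)
```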
Once data is stored efficiently, it is also necessary to be able to utilize it. Again, with a state as large as California, a massive amount of new data will be introduced per day, and great learning opportunities arise with this new data set. However, using actual man-hours to view and parse this footage would be unfeasible, as data would come in at a much higher rate than it could be viewed. This is a great application for machine learning and artificial intelligence, which are areas where Stanford excels. Very recently, Stanford researchers performed a study on body camera footage and found that "police interactions with black community members are more fraught than their interactions with white community members" [http://news.stanford.edu/2017/06/05/cops-speak-less-respectfully-black-community-members/]. The study was performed by forming an interdisciplinary team from the computer science, linguistics, and psychology departments and analyzing transcripts from the Oakland Police Department's body camera footage. The researchers were able to develop a machine learning model that could detect aspects of speech such as respect, apology, and concern. The article concludes by expressing how much potential there is in analyzing body camera footage, and the implications it has for bettering the community.
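The Stanford model itself is far more sophisticated, but as a rough illustration of the general approach, a text classifier trained on labeled utterances might look something like the sketch below; the tiny example data set is invented purely for illustration.

```python
# Rough sketch of the general approach: a bag-of-words text classifier
# trained on labeled utterances. The real Stanford model is far more
# sophisticated; the tiny data set below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "Sorry to stop you, sir, do you mind if I ask a few questions?",
    "Thank you for your patience, ma'am, you're free to go.",
    "Hands on the wheel. Now.",
    "I'm not going to ask you again.",
]
labels = ["respectful", "respectful", "less_respectful", "less_respectful"]

# TF-IDF features fed into a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, labels)

print(model.predict(["Appreciate your cooperation, have a good night."]))
```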
It will be very interesting to see not only what kinds of technology are utilized to efficiently store and process this massive new data set, but also how the legislative side of storing all of this data plays out. Some data will need to be stored temporarily, and some will need to be stored permanently; some data will be available for all to see, and some will need to be protected as confidential. This intersection of legislation and technology is exactly where experienced employees such as Chris Cruz will be necessary, and it is exciting to think of the possibilities in the future if all of these challenges are solved appropriately.
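As a purely hypothetical sketch of how such retention and access rules could be encoded in software, something like the following might work; the categories, retention periods, and access flags below are invented for illustration and are not drawn from any actual statute.

```python
# Hypothetical sketch of how retention and access rules might be encoded.
# Categories, retention periods, and access flags are invented for
# illustration and do not reflect any actual statute or policy.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class RetentionRule:
    category: str               # e.g. "routine", "evidence"
    retain_days: Optional[int]  # None means retain permanently
    public: bool                # whether the footage may be released publicly

RULES = {
    "routine": RetentionRule("routine", retain_days=60, public=False),
    "evidence": RetentionRule("evidence", retain_days=None, public=False),
}

def can_delete(category: str, recorded_on: date, today: date) -> bool:
    """Return True if footage in this category is past its retention window."""
    rule = RULES[category]
    if rule.retain_days is None:
        return False  # permanent retention
    return today > recorded_on + timedelta(days=rule.retain_days)

print(can_delete("routine", date(2017, 5, 1), date(2017, 8, 1)))   # True
print(can_delete("evidence", date(2017, 5, 1), date(2017, 8, 1)))  # False
```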
3 comments on “New Challenges With Body Camera Footage”
I love this article.
I have a good friend who is a South Central LAPD officer. From his perspective, he does not like the idea of body-worn or vehicle cameras. An interesting factor to consider is the State requirements for the storage of video. Ultimately, the resolution of the video determines the storage capacity required. The State government mandates that State and local law enforcement agencies in California maintain on-premises video from vehicle and body-worn cameras for at least 60 days, unless the video may be required as evidence in the prosecution of a crime.
http://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201520160AB69
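For a rough sense of how resolution drives that 60-day requirement, here is a quick back-of-envelope calculation; the recording hours and bitrates are assumptions, not actual camera specs.

```python
# Rough estimate of per-camera on-premises storage for a 60-day window.
# Recording hours and bitrates are assumed values, not actual camera specs.
HOURS_PER_DAY = 8      # assumed hours recorded per camera per day
RETENTION_DAYS = 60    # per the retention window discussed above

for label, bitrate_mbps in [("720p", 3), ("1080p", 6)]:
    gigabytes = HOURS_PER_DAY * 3600 * bitrate_mbps * RETENTION_DAYS / 8 / 1000
    print(f"{label}: ~{gigabytes:,.0f} GB per camera for 60 days")
# Doubling the bitrate (resolution) doubles the on-premises capacity needed.
```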
When I worked for Dimension Data in Southern California, we had a contract to support Coban Technologies (https://www.cobantech.com/) at LAPD. We were tasked with network security consulting: choosing perimeter designs for their firewalls and IPS, determining what encryption protocols were in use, and troubleshooting problems related to the upload and storage of video data. LAPD has actually invested a considerable amount of time and money to deploy the technology, but as you stated there are significant challenges.
One of the earlier lines in your post caught my eye: "I assumed the technology used within the government would be extremely outdated and basic." While it is clear that Chris is trying to shed this reputation for parts of the California government, I don't think that it is untrue for other arms of government. I also don't think that this is necessarily a bad thing. While it is undoubtedly true that the government produces and utilizes advanced technology for some purposes (see: https://theaviationist.com/category/stealth-black-hawk/), plenty of evidence points towards this not being the common case (http://www.popularmechanics.com/military/weapons/a19061/britains-doomsday-subs-run-windows-xp/). While sometimes this may be a detriment brought on by distant leadership or an unavailability of funds, it can also be by design. When the need for security or reliability far outweighs the need for performance upgrades, older and proven technologies can be a superior option. An example (from personal experience): if the military is using a chip manufactured to high specifications and has thoroughly verified its usage in applications both in and out of the field, then it makes no sense to upgrade to a new chip which might have unknown bugs. The same can be said of older applications and operating systems.
On a separate note, I think your post raises some interesting questions about public data, in particular what should and shouldn't be public and whose responsibility it is to analyze it. The media today is rife with people misusing data for their own gain, or to make headlines (https://www.washingtonpost.com/posteverything/wp/2017/02/10/crime-stats-should-inform-the-public-trump-is-misusing-them-to-scare-us-instead/?utm_term=.9a5b1d32df35). Publicizing data, at first glance, takes that power out of the hands of the few and puts it into the hands of the many. However, if that data requires specialized knowledge and hardware available to only a few to analyze (such as your example from Stanford above), it seems we might still be a step away from truly empowering the public (though in my opinion definitely many steps closer).
With increasingly large and complex data sets, drawing conclusions from the information received becomes highly difficult, and the amount of information is growing rapidly. The digital universe is large, and it's only getting larger. IDC, a global market intelligence firm, estimates that the amount of data "is doubling in size every two years, and by 2020 the digital universe – the data we create and copy annually – will reach 44 zettabytes, or 44 trillion gigabytes." With so much information present, drawing proper conclusions from thoroughly analyzed data sets seems to be the primary concern.
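As a quick sanity check on that claim: the 2014 EMC/IDC executive summary (linked below) puts the digital universe at roughly 4.4 zettabytes in 2013, and the short sketch here compares the projected 44 ZB figure against strict doubling every two years.

```python
# Quick sanity check on the IDC projection: roughly 4.4 ZB in 2013 growing
# to ~44 ZB by 2020 (figures from the EMC/IDC executive summary linked below).
start_zb, end_zb, years = 4.4, 44.0, 7
growth_factor = end_zb / start_zb   # 10x over seven years
doublings = years / 2               # doubling every two years -> 3.5 doublings
print(f"Implied growth: {growth_factor:.0f}x; strict doubling gives {2**doublings:.1f}x")
# ~10x projected vs ~11x from strict doubling, so "doubling every two years"
# is roughly consistent with the 44 ZB figure.
```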
In this article, a Stanford University study was brought up which found that "police interactions with black community members are more fraught than their interactions with white community members." Using the results of this study as supporting evidence for police discrimination against black residents would not be apt, as there are many factors that are unaccounted for. In fact, even Jennifer L. Eberhardt, one of the researchers who led the study, highlighted that "drawing accurate conclusions from hundreds of hours of footage is challenging. Just cherry-picking negative or positive episodes, for example, can lead to inaccurate impressions of police-community relations overall."
LINKS:
https://www.emc.com/leadership/digital-universe/2014iview/executive-summary.htm
https://source.opennews.org/articles/statistically-sound-data-journalism/
http://news.stanford.edu/press-releases/2017/06/05/cops-speak-less-respectfully-black-community-members/