Information Governance (InfoGovernance) is the specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archiving and deletion of information. It includes the processes, roles, standards and metrics that ensure the effective and efficient use of information to enable an organization to achieve its goals. Information governance should be an element in planning an enterprise's information architecture.

(Gartner Hype Cycle for Legal and Regulatory Information Governance, 2009, December 2009).

An Engagement Area (EA) is an area where the commander of a military force intends to contain and destroy an enemy force with the massed effects of all available weapons systems.

(FM 1-02, Operational Terms and Graphics, September 2004).

Saturday, January 24, 2015

DESI VI Workshop on Machine Learning in E-Discovery & Information Governance

Organizing Committee:
  • Jason R. Baron, Drinker Biddle & Reath LLP; University of Maryland
  • Jack G. Conrad, Thomson Reuters
  • Amanda Jones, H5
  • Dave Lewis, David D. Lewis Consulting
  • Douglas W. Oard, University of Maryland
Purpose: The DESI VI workshop (June 8, 2015, San Diego) aims to bring together researchers and practitioners to explore innovation and the development of best practices for the application of search, classification, language processing, data management, visualization, and related techniques to institutional and organizational records in e-discovery, information governance, public records access, and other legal settings.

Friday, January 23, 2015

What CENTCOM Can Teach You About Cloud Security

By Sekhar Sarukkai
Individuals calling themselves the “CyberCaliphate” hacked into the Twitter feed for the U.S. military’s Central Command last week, and for 40 minutes posted photos, links, and videos before the account was shut down. They also gained access to Central Command’s YouTube profile, updating the banner image and posting Islamic State propaganda videos. While no confidential or top secret information was stolen, and hackers did not gain access to the U.S. Department of Defense’s network, the incident illustrates the risks government agencies face as they increasingly rely on cloud services to fulfill their mission and communicate with the outside world. […]

Thursday, January 22, 2015

Podcast: The Government’s Dark Data: A Decade of Discovery

By Sharon Nelson, John Simek and Guest Jason Baron
In this episode of Digital Detectives, Sharon Nelson and John Simek interview Jason Baron about information governance, dark data, open government, and his role in The Decade of Discovery. Baron talks about the increasing amount of electronic data affecting the Freedom of Information Act (FOIA) and the discussion e-discovery experts need to have about providing public access to government records.

Wednesday, January 21, 2015

Technology-Assisted Review: Are We All In Agreement? (Cartoon and Clip)

The Cartoon and Clip of the Week for January 21, 2015

Daily we read, see, and hear more and more about the many technology approaches, quality-control measures, and defensibility risks associated with technology-assisted review. This week’s cartoon and clip highlight the importance of agreement among industry leaders on key technology-assisted review issues (cartoon) and provide a short list of recent articles that advise caution in blindly accepting vendor and thought-leader assertions (clip).


Three Recent Technology-Assisted Review Articles with Cautionary Exhortations

Provided below is a short list of three recent articles in which the authors highlight the importance of carefully considering claims and assertions when evaluating technology-assisted review approaches; a bare-bones sketch of the recall estimate at the center of these debates follows the list.
1.  “Be cautious in evaluating any criticisms you may read of ei-Recall from persons with a vested monetary interest in the defense of a competitive formula, especially vendors, or experts hired by vendors. Their views may be colored by their monetary interests. I have no skin in the game.”
2.  Every Form of Recall Has Its Price by Herb Roitblat
“We should be cautious, and even skeptical, however, of claims that any method currently in use suffers from fatal flaws or provides a “gold standard.” That supposed fatal flaw may be more a reflection of ignorance than one of statistical or methodological failure. Gold standards take time to emerge by consensus; they cannot be claimed by fiat.”
3.  “TAR/PC proponents point out that TAR is generally as effective as the “gold standard” of human review in identifying relevant records. However, it has at least two soft spots compared to what is achievable with alternative technology: text dependence & document unitization.”
Click here to follow a running list of recent posts on technology-assisted review.
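
For readers less familiar with what is being argued over: the contested quantity is recall, the fraction of responsive documents a review actually finds, which in practice must be estimated from a random sample rather than computed exactly. The sketch below is a minimal illustration of a simple point estimate with a rough normal-approximation interval; it is not ei-Recall, Roitblat's method, or any vendor's formula, and the document counts are invented purely for illustration.

```python
import math


def estimate_recall(responsive_found, responsive_in_sample, sample_size,
                    collection_size, z=1.96):
    """Back-of-the-envelope recall estimate from a random control sample.

    responsive_found     -- responsive documents the review actually retrieved
                            (treated here as known exactly, itself a simplification)
    responsive_in_sample -- responsive documents observed in a random sample
    sample_size          -- size of that random sample
    collection_size      -- total documents in the collection
    z                    -- z-score for the confidence level (1.96 ~ 95%)
    """
    # Estimate prevalence (richness) from the sample, then project it onto
    # the whole collection to approximate how many responsive documents exist.
    prevalence = responsive_in_sample / sample_size
    estimated_responsive_total = prevalence * collection_size

    # Point estimate of recall: documents found / estimated responsive total.
    recall = responsive_found / estimated_responsive_total

    # Normal-approximation interval on prevalence, pushed through the ratio.
    # Published methods (ei-Recall among them) build their intervals
    # differently; this is only meant to show why the interval can be wide.
    margin = z * math.sqrt(prevalence * (1 - prevalence) / sample_size)
    low_prev = max(prevalence - margin, 1e-9)
    high_prev = prevalence + margin
    recall_high = min(responsive_found / (low_prev * collection_size), 1.0)
    recall_low = responsive_found / (high_prev * collection_size)
    return recall_low, recall, recall_high


# Hypothetical counts, purely for illustration.
low, point, high = estimate_recall(responsive_found=8000,
                                   responsive_in_sample=50,
                                   sample_size=5000,
                                   collection_size=1_000_000)
print(f"recall ≈ {point:.2f} (roughly {low:.2f} to {high:.2f})")
```

Even with a 5,000-document sample, the low prevalence in this hypothetical leaves an interval running from roughly 0.63 to 1.0 around a 0.80 point estimate, which is exactly the kind of uncertainty the competing estimation formulas are trying to pin down.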

2014 Year-End E-Discovery Update

By Gibson Dunn
In our Mid-Year E-Discovery Update, we reported that 2014 was shaping up to be the “year of technology” in e-discovery. The remainder of the year more than lived up to those expectations.
Powerful new data analytics tools have become available for search and review, predictive coding pricing is becoming more accessible and its use appears to be gaining more traction, e-discovery technologies are becoming available through the cloud, and technologies that automate aspects of information governance are becoming increasingly available. New technologies are also creating new challenges, such as the increasing use of texting and other applications on […]

Tuesday, January 20, 2015

Every Form of Recall Has Its Price

By Herbert L. Roitblat, Ph.D.
When considering the measurement of eDiscovery accuracy, there are many alternatives available. As I have said elsewhere, measurement of eDiscovery accuracy is a critical enterprise, but the pendulum may have swung too far toward complex measurement. We should not let our concern about measurement overwhelm our process. The most common measure of eDiscovery accuracy is Recall. The challenge comes from the observation that we do not know the value of its two variables exactly. If we knew exactly which documents were responsive, then we would not […]
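
For reference, the recall measure the excerpt refers to is, in its standard form, the ratio of responsive documents actually retrieved to responsive documents in the whole collection:

\[
\text{Recall} = \frac{\text{number of responsive documents retrieved}}{\text{total number of responsive documents in the collection}}
\]

Neither the numerator nor the denominator is known exactly without reviewing every document, so both must be estimated, typically by sampling, which is where the measurement debate begins.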

IP Rights in Big Data

By Katherine Spelman and Holly Towie
In our last Big Data blog posting, we cautioned that the protection of the intellectual property (IP) rights in Big Data may warrant its own focus. While there are legitimate concerns about finding IP in data, because data may be an inert lump of code, bits, or pieces of information, it is worthwhile to think about the different kinds of IP that arise in conjunction with and in the context of Big Data. This blog entry focuses on the IP opportunities ‘in relation to’ Big Data.
The IP asset candidates to consider for Big Data fall into four possible categories: utility patents, trademarks, copyrights, and trade secrets.