Molly K. Land of the University of Connecticut has posted Participatory Fact-Finding: Developing New Directions for Human Rights Investigations Through New Technologies, forthcoming in The Future of Human Rights Fact-Finding (Philip Alston & Sarah Knuckey eds., Oxford University Press, 2015).
Here is the abstract:
This chapter considers the way in which broader participation in human rights fact-finding, enabled by the introduction of new technologies, will change the nature of fact-finding itself. Using the example of a participatory mapping project called Map Kibera, the chapter argues that new technologies will change human rights fact-finding by providing opportunities for ordinary individuals to investigate the human rights issues that affect them. Those who were formerly the ‘subjects’ of human rights investigations now have the potential to be agents in their own right. This ‘participatory fact-finding’ may not be as effective in ‘naming and shaming’ states and companies that violate human rights because the absence of the imprimatur of an established organization may render the information collected vulnerable to critique. At the same time, new and more participatory techniques of investigation will be better suited to other forms of accountability. Participatory fact-finding has the potential to be fact-finding as empowerment — the collection of information and documentation of facts as a means of empowering those affected by abuses to advocate for change. Participatory fact-finding will also be more effective in documenting violations of the positive obligation to fulfill rights than traditional fact-finding methods because it offers opportunities for gathering more data than is possible through victim and witness interviewing.
By supporting local participation, new technologies provide an opportunity to bring the practice of human rights fact-finding into greater alignment with human rights principles. Utilizing new technologies to achieve greater participation in human rights fact-finding will allow human rights organizations to ‘practice what they preach’ — to integrate the principle of participation into their own work in addition to recommending it to states and other duty-bearers. There is and will continue to be a significant need for the kind of fact-finding done by large and established international human rights organizations. Yet documentation projects involving citizens have the potential to be a new kind of fact-finding — to look and function differently than fact-finding as generally practiced by the major international non-governmental organizations and the United Nations. By opening up who can participate in investigation, new technologies will not replace established methodologies, but will instead broaden our understanding of what counts as human rights documentation and the purposes such investigations serve.
Tech for Justice Hackathon+ Austin was held February 21-22, 2015 in Austin, Texas, USA.
The event was organized by the Internet Bar Organization.
Video describing the results and winners of the event is available at: http://ift.tt/1afvsDX
Video of the project presentations at the event is at: http://ift.tt/1afvtYv
The Twitter account for the event is @TechForJustice
One Twitter hashtag for the event was #techforjustice
Click here for a storify of images and Twitter tweets from the event.
Here is a description of the event, from the registration site:
[…] This legal hackathon will gather programmers, lawyers, technologists, UI & UE designers, public and private sector organizations and government agencies to tackle how to allow EVERYONE to avail themselves of our justice system, or to find methods of achieving informal justice.
TECH FOR JUSTICE Hackathon+ Austin is working in partnership with the Texas Supreme Court, the Texas Judicial Council, Texas Legal Services Center, Legal Services Corporation, and more. This hackathon is unique in its direct partnerships with legal and judicial institutions, and will focus on the creation of solutions that will directly apply to the improvement of state court systems, as well as private justice mechanisms.
Participants will spend the weekend of February 21st and 22nd tackling problems and producing proof of concepts and prototypes that will be curated and presented to worldwide audiences. After the Hackathon, participants will be incentivized to continue to develop their ideas through mentoring, data sharing, and public-private partnerships to bring ideas to fruition. […]
Want to attend? Reserve your spot now and let us know if you’re a coder, legal professional, or just want to participate in general.
Want to participate remotely? Sign-up to participate from anywhere in the world.
All registrants will be the first to receive the latest info on the Hackathon and have the opportunity to take part in events prior to the Hackathon.
Want to watch the livestream? Sign up to take a front row virtual seat via our live webcast of the event in Austin. Livestream registration does not allow for participation in the event. If you want to participate, we recommend you sign up for remote participation as soon as possible, as participant numbers will be capped. […]
Here is a description from the announcement:
During the weekend of February 21-22, 2015, please join Code for Boston, the Commonwealth of Massachusetts, and municipal partners at the MIT Media Lab for a weekend of discussion, civic hacking, and data-driven exploration. The event will take place at the MIT Media Lab in the “Atrium” on the third floor.
The event will bring together government employees, technologists, and community members to focus on civic and social issues that face our local communities including public safety and justice, health and human services, economic development, and citizen engagement. We will also be celebrating open data in MA as part of International Open Data Day.
The Human Dynamics Lab will convene a special interest group both days exploring how Computational Law and Legal Informatics can provide applied solutions for CodeAcross civic and social issues. The Legal group will feature breakout sessions to rapidly prototype approaches and options. For more information on this special interest group, please contact Dazza Greenwood or contact us here.
Click here for the registration site.
J. B. Ruhl and Daniel Martin Katz have posted Measuring, Monitoring, and Managing Legal Complexity, forthcoming in Iowa Law Review.
Here is the abstract:
The American legal system is often accused of being “too complex.” For example, most Americans believe the Tax Code is too complex. But what does that mean, and how would one prove the Tax Code is too complex? The descriptive claim that an element of law is complex, and the normative claim that it is too complex, should be empirically testable hypotheses, yet in fact very little is known about how to measure legal complexity, much less to monitor and manage it.
Legal scholars have begun to employ the science of complex adaptive systems, also known as complexity science, to probe these kinds of descriptive and normative questions about the legal system. This body of work has focused primarily on developing theories of legal complexity and positing reasons for, and ways of, managing it. Legal scholars thus have skipped the hard part — developing quantitative metrics and methods for measuring and monitoring law’s complexity. But the theory of legal complexity will remain stuck in theory until it moves to the empirical phase of study, and thinking about ways of managing legal complexity is pointless if there is no yardstick for deciding how complex the law should be. In short, the theory of legal complexity cannot be put to work without more robust empirical tools for identifying and keeping track of complexity in legal systems.
This Article explores legal complexity at a depth not previously undertaken in legal scholarship. Part I orients the discussion by briefly reviewing the scholarship using complexity science to develop descriptive, prescriptive, and ethical theories of legal complexity. Parts II through IV then shift to the empirical front, identifying potentially useful metrics and methods for studying legal complexity. Part II draws from complexity science to develop methods that have or might be applied to measure different features of legal complexity, including metrics for agents, trees, networks, computation, feedback, and emergence. Part III proposes methods for monitoring legal complexity over time, in particular by conceptualizing what we call Legal Maps — a multi-layered, active representation of the legal system network at work. Part IV concludes with a preliminary examination of how the measurement and monitoring techniques could inform interventions designed to manage legal complexity through use of currently available machine learning and user interface design technologies.
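As a flavor of the network metrics Part II surveys, here is an illustrative sketch (my own construction, not the Article's; the section names and cross-reference edges are invented) that treats cross-references between code sections as a directed graph and reads off two rough complexity indicators:

```python
# Hypothetical cross-reference edges: (citing_section, cited_section).
# Density measures how interconnected the code is; in-degree
# concentration flags "hub" sections that many others depend on.
from collections import Counter

edges = [("s1", "s2"), ("s1", "s3"), ("s2", "s3"), ("s3", "s1"), ("s4", "s3")]
nodes = {n for e in edges for n in e}

# Directed density = E / (N * (N - 1)): share of possible ordered
# cross-reference pairs that actually occur.
density = len(edges) / (len(nodes) * (len(nodes) - 1))

# The most-cited section is a candidate complexity hot spot.
in_degree = Counter(dst for _, dst in edges)
print(round(density, 3), in_degree.most_common(1))  # 0.417 [('s3', 3)]
```

Real studies of this kind would run the same computations over the full citation graph of a statute or a body of case law rather than a toy edge list.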
Alan deLevie of 18F has launched two tools for working with legal data from the U.S. Federal Communications Commission (FCC):
Provides permalinks to the FCC Record
A tool for exporting filings from the U.S. Federal Communications Commission as structured data
HT @adelevie (here, here, and here)
Cynthia Farina, Hoi Kong, Cheryl Blake, Mary Newhart, and Nik Luka have published Democratic Deliberation in the Wild: The McGill Online Design Studio and the RegulationRoom Project, Fordham Urban Law Journal, 41, 1527-1580 (2014).
The full text appears to be available from commercial vendors.
Here are excerpts from the introduction:
[…] Here we describe two projects, both being conducted by university researchers, that use innovative technological tools to motivate and support broader, better citizen engagement in government decision making. One is a digitally-mediated community-based urban design studio. […] A collaboration among law and urban planning faculty of McGill University and a Montréal community organization, this project aims to involve area residents in the redevelopment of a forty-five acre post-industrial site in Montréal’s midtown Bellechasse sector. […] The second is RegulationRoom.org, an online website that supports informed public participation in the process of making government regulations (rulemaking). […]
[…] the projects […] aim to discover how the digitally empowered citizen-participant can be meaningfully engaged through processes designed to prime deliberative discussion and knowledge production, rather than mere voting and venting.
The Article proceeds as follows: Part I discusses the problematic yet promising relationship between the theory of deliberative democracy and the practice of public participation in government decision making. Part II gives an overview of the MODS Bellechasse project and the RegulationRoom project, and then focuses on how each project uses technology and human effort to lower the principal barriers to broader, better public participation. Part III discusses lessons learned from the projects and identifies challenges that remain. […]
Colin Starger of the University of Baltimore has posted Hacking Mass Incarceration, at In Progress.
Here are excerpts from the post:
[…] Here at In Progress, we hope to contribute to collective efforts by hacking Supreme Court doctrine. The idea is to open up the law around prisons and make connections to help generate anti-mass incarceration constitutional arguments. What’s more, the goal is to crowdsource the connection-making process. To do this, I am experimenting with doctrinal map designs to facilitate non-specialist learning of complex doctrinal systems.
Last time, I charted out a series of Eighth Amendment doctrinal networks and found them large and unwieldy. It’s not realistic to expect anybody to read over 100 Supreme Court cases while mining for anti-mass-incarceration arguments. So this time, I want to narrow the focus. Below find the 2-degree citation network linking the Court’s 2011 prisoner overcrowding decision Brown v. Plata to 1958’s Trop v. Dulles, a seminal pronouncement about the Eighth Amendment’s meaning as a guarantee of human dignity in light of evolving standards of decency. […]
Note the new design feature of the map above: its interactivity. Click on the map and then click on any of the opinions. You’ll find yourself looking at an HTML deck that (a) has a very quick summary of the case holding; and (b) contains links to open resources about the case provided by CourtListener, Cornell Legal Information Institute, Oyez, and the Supreme Court Database. Some of the decks also contain other potentially useful information — check out the Brown v. Plata deck as an example (make sure you tap your right arrow key!).
As the above map demonstrates, legal hacking is a collective activity. If the map helps at all, it is only because it leverages free resources provided by great organizations doing great work. The HTML Deck platform is an especially cool free resource created by Dave Zvenyach, the 2014 DC Legal Hacker of the Year. His example should inspire us all to tinker and build and seek creative solutions. […]
For the map and more details, please see the complete post.
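The "2-degree citation network" the excerpts describe could be extracted along these lines; a sketch in which only the Plata and Trop anchors come from the post, and the other cases and edges are invented:

```python
# Breadth-first search limited to k citation hops from an anchor case.
from collections import deque

# Hypothetical citation edges: citing case -> list of cited cases.
cites = {
    "Plata": ["Estelle", "Trop"],
    "Estelle": ["Trop"],
    "Graham": ["Trop"],
}

def within_k_hops(graph, start, k):
    """Return every case reachable from `start` in at most k citation hops."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

reachable = within_k_hops(cites, "Plata", 2)
print(sorted(reachable))  # ['Estelle', 'Plata', 'Trop']
```

Cases outside the 2-hop neighborhood (here, "Graham", which cites Trop but is not cited by Plata's network) are filtered out, which is what keeps such a map readable compared with the 100-plus-case networks the post mentions.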
Chris Marsden of the University of Sussex has posted Open Access to Law – How Soon? at the site of the Society for Computers and Law.
Here is the introduction to the post:
Professor Chris Marsden explains what is behind the Openlaws.eu project and explores the current landscape of access to law in the UK.
The law is a slow-moving beast, as are most lawyers (members of this august Society obviously excepted). Yet with more non-professionals appearing before the courts, in an ever more litigious society, but with fewer resources to engage legal professionals, learning something of the law is more important than ever. In a Knowledge Society, citizens can now access information about their surgeon, their school, their university professor, their neighbours – but not the law, with few exceptions. This is untenable; governments worldwide, together with legal professionals and scholars, have in the past two decades made plans to move towards open access to law via the Internet. This article explores how far the English law has moved, and what remains to be done. It concludes by explaining the pan-European openlaws.eu project, which is releasing its beta version in a Salzburg code camp on 20-21 March (the hills may well be alive with the sound of legal hacking). […]
The event was organized by the UCLA Department of Information Studies and the Police Officer Involved Homicides Project (@POIHomicides).
The Website for the event is at: poihomicides.org/
The schedule for the event is available at: http://ift.tt/1vwoNsW
The data worked on at the event are available at: http://ift.tt/1vwoNt0
Twitter hashtags for the event included #endbrutality and #epbhackathon
Click here for a storify of images and Twitter tweets from the event.
Here is a description of the event, from the event’s Website:
[…] Join us for the UCLA Information Studies Department’s Hackathon on Police Brutality Data in Los Angeles County. The event looks to engage those interested in community organizing, social justice, hacking, and data mining with the unique and imperfect data sets regarding police brutality in LA County. This hackathon will examine how police brutality data is captured and disseminated among state and federal organizations and work to bring attention to the existing data, as well as the holes in existing data, through data mining and visualization. […]
HT @miriamkp
Soufiane El Jelali, Elisabetta Fersini, and Enza Messina have published Legal retrieval as support to eMediation: Matching disputant’s case and court decisions, forthcoming in Artificial Intelligence and Law.
Here is the abstract:
The perspective of online dispute resolution (ODR) is to develop an online electronic system aimed at solving out-of-court disputes. Among ODR schemes, eMediation is becoming an important tool for encouraging the positive settlement of an agreement among litigants. The main motivation underlying the adoption of eMediation is the time/cost reduction for the resolution of disputes compared to the ordinary justice system. In the context of eMediation, a fundamental requirement that an ODR system should meet relates to both litigants and mediators, i.e. to enable an informed negotiation by informing the parties about the rights and duties related to the case. In order to match this requirement, we propose an information retrieval system able to retrieve relevant court decisions with respect to the disputant case description. The proposed system combines machine learning and natural language processing techniques to better match disputant case descriptions (informal and concise) with court decisions (formal and verbose). Experimental results confirm the ability of the proposed solution to empower court decision retrieval, thereby enabling a well-informed eMediation process.
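As a rough illustration of the matching step the abstract describes — not the authors' system, which combines machine learning and NLP techniques beyond this — plain TF-IDF cosine similarity over invented case texts:

```python
# Rank court decisions against an informal disputant description
# using TF-IDF weighted cosine similarity. All texts are invented.
import math
from collections import Counter

decisions = {
    "case_a": "tenant failed to pay rent landlord sought eviction",
    "case_b": "employer dismissed employee without notice severance owed",
}
query = "my landlord wants to evict me because I missed rent payments"

def tf_idf_vectors(docs):
    """Map each doc name to a {term: tf*idf weight} vector."""
    n = len(docs)
    df = Counter(t for text in docs.values() for t in set(text.split()))
    return {
        name: {t: c * math.log((1 + n) / (1 + df[t]))
               for t, c in Counter(text.split()).items()}
        for name, text in docs.items()
    }

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

vecs = tf_idf_vectors({**decisions, "_query": query})
q = vecs.pop("_query")
ranked = sorted(decisions, key=lambda name: cosine(q, vecs[name]), reverse=True)
print(ranked[0])  # case_a: it shares "rent" and "landlord" with the query
```

The gap the paper targets is visible even in this toy: "evict" and "eviction" do not match at the token level, which is exactly why informal, concise descriptions need richer NLP to align with formal, verbose decisions.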
Marc Lauritsen of Capstone Practice Systems has published an article entitled On balance, forthcoming in Artificial Intelligence and Law.
Here is the abstract:
In the course of legal reasoning—whether for purposes of deciding an issue, justifying a decision, predicting how an issue will be decided, or arguing for how it should be decided—one often is required to reach (and assert) conclusions based on a balance of reasons that is not straightforwardly reducible to the application of rules. Recent AI and Law work has modeled reason-balancing, both within and across cases, with set-theoretic and rule- or value-ordering approaches. This article explores a way to model balancing in quantitative terms that may yield new questions, insights, and tools.
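A toy rendering of what "balancing in quantitative terms" might look like (my own illustration, not Lauritsen's model; the reasons and weights are invented):

```python
# Each reason for or against a conclusion carries a signed weight;
# the conclusion follows the sign of the weighted sum.
reasons = {
    "promise was in writing": +3.0,      # pro-enforcement
    "no consideration given": -2.0,      # contra
    "detrimental reliance shown": +1.5,  # pro
}

balance = sum(reasons.values())
conclusion = "enforce" if balance > 0 else "do not enforce"
print(balance, conclusion)  # 2.5 enforce
```

The interesting modeling questions the article raises start where this sketch stops: where the weights come from, and how they should be ordered or compared across cases.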
The publisher notes:
This work is based on an earlier work: “On Balance,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Law. […] http://ift.tt/1viz63K
MWAIL 2015: ICAIL Multilingual Workshop on AI and Law
Daniel Martin Katz and Michael Bommarito of Michigan State University and its ReInvent Law Laboratory have launched a Website for their course entitled Legal Analytics.
The site includes course materials, slides, and code.
The site is described in their recent post at Computational Legal Studies.
Here is the course description, from the course Website:
This intro class is designed to train students to efficiently manage, collect, explore, analyze, and communicate in a legal profession that is increasingly being driven by data.
Our goal is to imbue our students with the capability to understand the process of extracting actionable knowledge from data, to distinguish themselves in legal proceedings involving data or analysis, and to assist in firm and in-house management, including billing, case forecasting, process improvement, resource management, and financial operations.
This course assumes prior knowledge of statistics, such as might be obtained in Quantitative Methods for Lawyers or through advanced undergraduate curricula. This class is not for everyone; for many, it will prove to be challenging. With that warning, we encourage you to consider your interest and career aspirations against the unique experience and value of this class. To our knowledge, this is the only existing class that teaches these quantitative skills to lawyers and law students. […]
Video has been posted of John O. McGinnis’s presentation entitled Computational Jurisprudence, given 29 January 2015 at Stanford Law School, as part of the Stanford CodeX Speaker Series.
Here is the description from the event announcement:
Professor John O. McGinnis will discuss two issues. First, he will describe how machines are coming to disrupt the legal profession. Superstars and specialists in fast changing areas of the law will prosper — and litigators and counselors will continue to profit — but the future of the journeyman lawyer is insecure. He will then discuss in more detail the future of legal search in a world of increasing machine intelligence and its impact on jurisprudence. The key to progress in creating a better computerized legal search engine is to reduce the signal to noise ratio in the link between the user and the search engine. As this ratio decreases, legal search translates the uncompressed form of legal information into an algorithm for predicting the law. The ongoing improvement in legal search is likely to change the optimal form of the law by changing the cost of finding it. In particular, exponential increases in computational power make standards relatively more attractive than rules by decreasing the costs of standards’ application.
The presentation appears to be based on two papers:
HT Roland Vogl
Ryan Whalen of Northwestern University has posted The Law Prof Twitter Network 2.0.
Here are excerpts from the description:
[…] I wrote a short script to read all of the law prof twitter handles included in [Bridget Crawford’s Law Prof Twitter] census (along with a few provided by others individually) and query the twitter API to get the follower lists and statistics for each user. This allowed me to both rank law prof twitterers (because we all know how much people like to rank things) and project them onto an interactive network so we can see how they relate to one another.
To view the network, click through the image below. The law prof network (consisting of following relationships amongst law profs in the census) has 583 nodes and 20709 edges (directed density = 0.061). The entire network (including all of the followers of all of the law profs) is much larger. In total there are 795,399 unique twitter users who follow law profs. […]
Colors in the network correspond to communities detected using the Louvain method. Modularity in the network is 0.336, with 4 communities and a few solo nodes/pairs. […]
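The reported figures can be sanity-checked directly; a minimal sketch assuming only the numbers quoted above (583 nodes, 20,709 edges), not Whalen's actual script:

```python
# Recompute the directed density of the law prof following network
# from the node and edge counts reported in the post.
n_nodes, n_edges = 583, 20709

# Directed density = E / (N * (N - 1)): the share of possible
# ordered "A follows B" pairs that actually occur.
density = n_edges / (n_nodes * (n_nodes - 1))
print(round(density, 3))  # 0.061, matching the post

# Given the full edge list, the Louvain communities mentioned in the
# post could be recovered with networkx (assumed available), e.g.:
#   import networkx as nx
#   G = nx.DiGraph(edge_list)
#   parts = nx.community.louvain_communities(G.to_undirected(), seed=42)
```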
For the network diagram, distribution figure, tables, and more details, please see the complete post.
James Ming Chen of Michigan State University has posted a newly revised version of Modeling Citation and Download Data in Legal Scholarship.
This revised version of the paper covers data through 2014.
Here is the abstract:
Impact factors among law reviews provide a measure of influence among these journals and the schools that publish them. Downloads from the Social Science Research Network (SSRN) serve a similar function. Bibliometrics is rapidly emerging as a preferred alternative to more subjective assessments of academic prestige and influence. Law should embrace this trend.
This paper evaluates the underlying mathematics of law review impact factors and per-author SSRN download rates by institution. Both of these measures follow the sort of stretched exponential distribution that characterizes many right-skewed distributions found in the natural and social sciences. Indeed, an ordinary exponential distribution — that is, a stretched exponential distribution with an exponent of 1 — generates strikingly accurate, even beautiful, models of both phenomena. Mindful of physicist Hermann Weyl’s admonition that any choice between truth and beauty should favor beauty, I freely admit to sacrificing some marginal improvement in the descriptive accuracy of my model in order to develop the elegant mathematics of the ordinary exponential distribution.
Further elaboration of this model of law review impact factors as an exponential distribution yields the Gini coefficient of the secondary legal literature, to the extent that each journal’s influence is expressed by its impact factor. An identical analysis applies to law school prestige as measured by per-author download rates on SSRN. The remarkable result of this inequality computation is that the Gini coefficient of legal academia, when prestige in this field is modeled according to an ordinary exponential distribution, is exactly 1/2. That outcome is determined analytically rather than empirically. The inverse Simpson index similarly reflects an exact twofold reduction in the second-order diversity of law review publishing and academic prestige as measured through SSRN downloads. I conclude that modeling law review impact factors and SSRN download rates according to ordinary exponential distributions gives rise to a powerful mathematical tool for assessing influence among law journals and law schools.
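The analytic claim that an ordinary exponential distribution has a Gini coefficient of exactly 1/2 is easy to check numerically; a quick sketch (my illustration, not from the paper):

```python
# Monte Carlo check: the Gini coefficient of Exp(lambda) samples
# should land near the analytic value 1/2, independent of lambda.
import random

random.seed(0)
xs = sorted(random.expovariate(1.0) for _ in range(20000))

# Gini via the sorted-values formula:
#   G = (2 * sum(i * x_i) / (n * sum(x))) - (n + 1) / n,  x ascending, 1-indexed
n = len(xs)
total = sum(xs)
gini = 2 * sum(i * x for i, x in enumerate(xs, start=1)) / (n * total) - (n + 1) / n
print(round(gini, 3))  # close to the analytic value 0.5
```

The analytic result itself follows because the mean absolute difference of Exp(λ) is 1/λ and the Gini coefficient is the mean absolute difference divided by twice the mean, giving (1/λ) / (2/λ) = 1/2 for any λ.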
HT @chenx064