Tuesday, June 3, 2014

Katz and Bommarito’s research on measuring legal complexity is featured in Wired Science

Daniel Martin Katz and Michael Bommarito's paper, entitled Measuring the Complexity of the Law: The United States Code, is the focus of a new post by Samuel Arbesman at Wired Science: Measuring the Complexity of the Law.


Here are excerpts from the post:



[...] But there are many other anthropic systems that might be considered technologies. And this includes our laws. Essentially, our legal codes are complicated technologies, growing and becoming more interconnected over time.


But how complicated are our laws? In a working paper titled “Measuring the Complexity of the Law: The United States Code”, Daniel Katz and Michael Bommarito of Michigan State University recently set out to measure exactly that. They attempted to quantitatively measure the complexity of the United States Code, using what is roughly a metric for how hard it is to understand it. The U.S. Code is essentially the collection of all federal laws, and consists of 51 Titles, or sections, that each deal with different topics. For example, Title 11 is related to bankruptcy, Title 26 is our tax code, and Title 39 deals with our postal service. Since there are many sections with different topics and styles, comparing the complexity of different Titles is a natural means of examining differential legal complexity. But how to do so?


While there are many ways to measure complexity, Katz and Bommarito focused on rule search and rule assimilation, which are, respectively, “How complex is the task of determining the rule or set of rules applicable to the conduct in question?” and “How complex is the process of assimilating the information content of a body of legal rules?” While there are many other ways to measure legal complexity, such as the cost of compliance with laws, they chose to limit their inquiry to just how hard it is to understand this legal code.


But even within this type of complexity, there are many things one could examine. For example, the more interconnected a section of the law is, the harder it is to truly assimilate and understand the law’s implications. [...] So one way of looking at complexity is by seeing how many connections a section has to other sections.
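To make that interconnectedness idea concrete, here is a minimal sketch (not from the paper) that treats cross-references between sections as a directed graph and counts each section's connections. The edge list is a hand-made, hypothetical example, not data extracted from the actual U.S. Code:

```python
# Minimal sketch: model cross-references between U.S. Code sections as a
# directed graph and count each section's connections. The edge list below
# is an illustrative placeholder, not real extracted citation data.
import networkx as nx

# (citing section, cited section) pairs -- hypothetical examples only
cross_references = [
    ("26 USC 1", "26 USC 63"),
    ("26 USC 63", "26 USC 151"),
    ("26 USC 63", "26 USC 161"),
    ("11 USC 101", "26 USC 1"),
]

graph = nx.DiGraph(cross_references)

# A section's total degree (citations made plus citations received) is one
# simple proxy for how interconnected -- and so how hard to assimilate -- it is.
for section in sorted(graph.nodes):
    print(section, "connections:", graph.degree(section))
```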


And they looked at a whole variety of other metrics as well. They examined the hierarchical depth of the different Titles (section, subsection, etc.), in order to see how people might navigate its structure; the number of words in different Titles; and even the average word size. [...]
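For illustration, here is a short sketch of how metrics like these (hierarchical depth, word count, average word length) could be computed over a parsed Title. The nested structure and its text are invented stand-ins, not the authors' actual data or pipeline:

```python
# Sketch: structural and language metrics over a parsed Title, represented as
# a nested dict of {"text": ..., "children": [...]}. The content is made up.

title = {
    "text": "",
    "children": [
        {"text": "General definitions apply throughout this title.",
         "children": []},
        {"text": "",
         "children": [
             {"text": "Gross income means all income from whatever source derived.",
              "children": []},
         ]},
    ],
}

def depth(node):
    """Hierarchical depth: 1 for a leaf node, plus 1 per nesting level."""
    if not node["children"]:
        return 1
    return 1 + max(depth(child) for child in node["children"])

def words(node):
    """Collect all words in this node and its descendants."""
    out = node["text"].split()
    for child in node["children"]:
        out.extend(words(child))
    return out

ws = words(title)
print("depth:", depth(title))
print("word count:", len(ws))
print("average word length:", sum(len(w) for w in ws) / len(ws))
```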



For more details, please see the post and the original paper.


HT Dan Katz







via Legal Informatics Blog http://ift.tt/S1vrd2
