Delving Through 65,000 Pages of Government Submissions: The Challenges

Authors: David Ackerman, Alyssa Moore, David Chan, Tatiana Meleshko, Byron Chu

Last June, Cybera's data scientists took on a monumental task: to create a tool that can easily examine the thousands of documents and testimonials that were provided to the CRTC for its 2016 "basic telecommunications service" consultation, and search for key words and common language. Our goal was to use data science principles to help Canadians better understand how policy decisions are made to safeguard the internet.

The project is funded by the Canadian Internet Registration Authority (CIRA) Community Investment Program.

This has not been an easy job. Over the course of the consultation, the equivalent of ~65,000 pages (or ~216 novels' worth) of material was published on the public record. The types of documents submitted range from Microsoft Word documents to PDFs to spreadsheets. Some written submissions read like novels; others were tightly scripted point-form notes. Many submissions used conversational English; others were written in legalese. Extracting the data and written language from the different documents, and then classifying the arguments for and against the "basic telecommunications service" designation, has been very tricky.

In this blog, we will go over some of the complications we have run into in the last 6-7 months, and what tools we have used to build our understanding. In the next month, we hope to unveil the results of this massive undertaking!

Issue #1: Navigating the CRTC site

If you peruse the consultation submissions on the CRTC website, your only option is to navigate through the hierarchy they have set up – you often have to click through several sub-pages to get to a specific document. Also, many of the documents on the site are PDF/Word docs, which are not easy to view directly in a web browser. Even HTML form submissions need to be downloaded before you can read them.

Our Solution: Sinatra

We wrote a document browser web application (in Ruby), using the lightweight Sinatra framework. This tool enables users to navigate the submission documents based on three different criteria:

  • By date of submission

  • By associated company/organization

  • By sections of documents that match "fuzzy searches" (i.e. searches for key words or phrases)
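To give a concrete sense of how such a browser might be wired up, here is a minimal Sinatra sketch covering the first two criteria. The data file, route names, and view template are hypothetical illustrations under assumed conventions, not Cybera's actual code.

```ruby
# Minimal sketch of a Sinatra document browser (hypothetical routes and data
# layout -- not Cybera's actual implementation).
require 'sinatra'
require 'json'

# Placeholder: document metadata (date, organization, path) loaded from disk.
DOCUMENTS = JSON.parse(File.read('documents.json'))

# Browse by date of submission, e.g. GET /by-date/2016-04-11
get '/by-date/:date' do
  matches = DOCUMENTS.select { |doc| doc['submitted_on'] == params[:date] }
  erb :document_list, locals: { documents: matches }
end

# Browse by associated company/organization, e.g. GET /by-organization/CIRA
get '/by-organization/:name' do
  matches = DOCUMENTS.select { |doc| doc['organization'] == params[:name] }
  erb :document_list, locals: { documents: matches }
end

# A third route for "fuzzy searches" would delegate to a search backend
# such as Solr, described later in this post.
```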

We hope to use text mining techniques like topic analysis, clustering, and named entity recognition to allow additional ways of organizing/browsing the documents.

We're currently using the document browser to give us a high-level "dashboard" view of the quality of the data. Our plan is to refine it to answer specific questions, such as "how many, and which groups of, interveners believe the internet should be considered a basic service?" and "what do they think it should cost?".

We're trying to do this in such a way that you don't need to manually go through every document to find these answers. We also want to make it easier to trace the source text of a statement or fact put forward.

Issue #2: Finding a text match in a large trove of documents

First, there is the challenge of dealing with the different formats of the documents, as PDF and DOC files are not nearly as easy to extract information from as raw text or more standardized formats. Then there is the challenge of organizing all the text from these documents so that it can be accessed through a single search.
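The post doesn't name the extraction libraries involved, but as one illustration of the first step, the Ruby pdf-reader gem can pull raw text out of a PDF submission (the filename below is hypothetical):

```ruby
# Illustration only: extracting raw text from a PDF submission with the
# pdf-reader gem. This is not necessarily the tooling Cybera used.
require 'pdf-reader'

reader = PDF::Reader.new('intervention_1234.pdf')  # hypothetical filename
text = reader.pages.map(&:text).join("\n")

puts "Extracted #{text.split.size} words from #{reader.page_count} pages"
```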

Our Solution: Neo4J

One tool we have been using is Neo4J, which is a Java-based "graph database". It models relationships between data as connected nodes on a graph, and has been used in investigative journalism on the Paradise and Panama Papers.

The important bits for us:

  • The data model is very flexible and allows us to easily visualize whatever bits of knowledge we can extract from the documents.

  • Connections can emerge that aren't easily apparent when just reading through the documents. (Just looking at an emerging graph often invites more questions and avenues for analysis.)

  • It's very natural to query and "ask questions" about your data once it's in this connected form.
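As a rough illustration of what "asking a question" of the graph could look like, the sketch below sends a Cypher query to Neo4j's transactional HTTP endpoint from Ruby. The node labels and relationship type (Organization, Document, SUBMITTED) are hypothetical, not Cybera's actual schema, and the sketch assumes a local Neo4j instance with authentication disabled.

```ruby
# Sketch: querying a Neo4j graph over its transactional HTTP endpoint.
# Labels and relationships here are hypothetical; assumes local Neo4j, no auth.
require 'net/http'
require 'json'
require 'uri'

cypher = <<~CYPHER
  MATCH (org:Organization)-[:SUBMITTED]->(doc:Document)
  RETURN org.name AS organization, count(doc) AS submissions
  ORDER BY submissions DESC
  LIMIT 10
CYPHER

uri = URI('http://localhost:7474/db/data/transaction/commit')
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.body = { statements: [{ statement: cypher }] }.to_json

response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }

# Each result row comes back as an array under the "row" key.
JSON.parse(response.body)['results'].first['data'].each do |row|
  organization, submissions = row['row']
  puts "#{organization}: #{submissions} submissions"
end
```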

Solr

We're also using the Apache Solr project to help find relevant sections of text from larger documents that we want to pull out and do further analysis on. Most important for us at the moment is its ability to do quick "fuzzy" matching on document contents based on queries we give it.
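For a sense of the kind of fuzzy matching Solr offers, here is a small sketch using the rsolr gem. The core name ("submissions") and field names are hypothetical, not the actual index layout.

```ruby
# Sketch: a fuzzy query against a Solr core using the rsolr gem.
# The core name and field names are hypothetical.
require 'rsolr'

solr = RSolr.connect(url: 'http://localhost:8983/solr/submissions')

# "affordability~2" tolerates up to two character edits (Solr fuzzy matching);
# the phrase query with a slop of 5 allows a few words in between.
response = solr.get('select', params: {
  q: 'content:affordability~2 OR content:"basic service"~5',
  fl: 'id,title,score',
  rows: 20
})

response['response']['docs'].each do |doc|
  puts "#{doc['score'].round(2)}  #{doc['title']} (#{doc['id']})"
end
```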

We feed those results back into the Neo4J graph as segments that can be traced back to the documents they come from, or the queries that generated them.
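One way to record such a traceable segment is a Cypher write against the same transactional endpoint shown earlier; the labels, relationship types, and document ID below are hypothetical stand-ins for whatever schema the real graph uses.

```ruby
# Sketch: storing a Solr hit as a Segment node linked back to its source
# Document and the Query that surfaced it. Schema and IDs are hypothetical;
# assumes a local Neo4j instance with authentication disabled.
require 'net/http'
require 'json'
require 'uri'

cypher = <<~CYPHER
  MATCH (doc:Document {id: $doc_id})
  MERGE (q:Query {text: $query_text})
  CREATE (seg:Segment {text: $segment_text})
  CREATE (seg)-[:FOUND_IN]->(doc)
  CREATE (seg)-[:MATCHED_BY]->(q)
CYPHER

payload = {
  statements: [{
    statement: cypher,
    parameters: {
      doc_id: 'doc-0001',                                      # hypothetical ID
      query_text: 'affordability~2',
      segment_text: 'Affordable broadband access is essential ...'
    }
  }]
}

uri = URI('http://localhost:7474/db/data/transaction/commit')
Net::HTTP.post(uri, payload.to_json, 'Content-Type' => 'application/json')
```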

Issue #3: A lack of easy metadata about the documents

Uploading the documents is one thing; figuring out whether we're grabbing the right documents (and what's even in those documents) is another.

Our Solution: Web scraper

We're using a Ruby-based web scraping tool to crawl through the CRTC site, keeping track of as much metadata about the documents as possible, and also retrieving those documents.
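The post doesn't name the scraping library, but as an illustration, a Nokogiri-based crawl of a listing page could look like the sketch below. The URL and CSS selectors are placeholders; the real CRTC site's structure differs.

```ruby
# Sketch: collecting document links and metadata from a listing page in Ruby.
# The URL and selectors are hypothetical placeholders, not the real CRTC site.
require 'open-uri'
require 'nokogiri'
require 'csv'

listing_url = 'https://example.org/crtc-listing'  # placeholder URL
page = Nokogiri::HTML(URI.open(listing_url))

CSV.open('documents.csv', 'w') do |csv|
  csv << %w[title organization submitted_on url]
  page.css('table tr').each do |row|
    link = row.at_css('a') or next
    csv << [
      link.text.strip,                    # document title
      row.at_css('.organization')&.text,  # submitting organization (hypothetical selector)
      row.at_css('.date')&.text,          # submission date (hypothetical selector)
      URI.join(listing_url, link['href']).to_s
    ]
  end
end
```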

Other Issues

A major issue for us has been the different conventions used by the organizations that submitted documents, or within a particular part of the process. There's no quick solution for this: we're text mining (using a mixture of manual coding to extract information based on conventions we identify, fuzzy matching, and unsupervised/supervised machine learning techniques) to better categorize the common phrases and language used in government consultation submissions.

We've written up a more extensive document on these and other issues here.

Future plans

Going forward, we hope to add more ways of navigating the documents, as well as provide users with a mechanism for directly augmenting the data (perhaps through machine-learning-assisted tagging). We'd also like to include methods for correcting errors in the data in a traceable, transparent manner. Ideally, the entire toolset will be adaptable to navigating other public consultations and forms of policy documentation.

We plan to publicly release our tool in the coming month. If you have any suggestions or tips based on the work we outlined above, please feel free to leave a comment. Otherwise, watch this space!
