A frequent obstacle in data journalism is that the information you want to analyse is locked away in a PDF. Here are 6 ways to tackle that problem – with space for a 7th:
1) For simple PDFs: Google Docs’ conversion facility
Google Docs recently added a feature that allows you to convert a PDF to a ‘Google document’ when you upload it. It’s pretty powerful, and about the simplest way you can extract information.
It does not work, however, if the PDF was generated by scanning – in other words if it is an image, rather than a document that has been converted to PDF.
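If you would rather script the conversion than click through the web interface, much the same thing can be done through the Google Drive API. The sketch below is only an illustration, not the upload route described above: it assumes you have the google-api-python-client library installed and OAuth credentials already set up, which isn't covered here.

    # Rough sketch: upload a PDF, ask Google to convert it to a Google document,
    # then export that document as plain text. Assumes OAuth credentials (`creds`)
    # have already been obtained, e.g. via the google-auth libraries.
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    def pdf_to_text(creds, pdf_path):
        drive = build("drive", "v3", credentials=creds)

        # Setting a Google Docs MIME type on upload asks Drive to convert the PDF.
        metadata = {"name": "converted-pdf",
                    "mimeType": "application/vnd.google-apps.document"}
        media = MediaFileUpload(pdf_path, mimetype="application/pdf")
        created = drive.files().create(body=metadata, media_body=media,
                                       fields="id").execute()

        # Export the converted Google document back out as plain text (bytes).
        return drive.files().export(fileId=created["id"],
                                    mimeType="text/plain").execute()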
2) For scanned documents and pulling out key players: Document Cloud
Document Cloud is a tool for journalists to convert PDFs to text. It will also add ‘semantic’ information along the way – which organisations, people and other ‘entities’ such as dates and locations are mentioned in the document – and there are some useful features that allow you to present documents for others to comment on.
The good news is that it works very well with scanned documents, using Optical Character Recognition (OCR). The bad news is that you need to ask permission to use it, so if you don’t work as a professional journalist you may not be able to use it. Still, there’s no harm in asking.
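If you do get access, the site can also be driven from code. Here is a very rough sketch using the python-documentcloud wrapper – method and attribute names can differ between versions of the wrapper and of DocumentCloud itself, so treat it as a starting point rather than a recipe.

    # Assumes a DocumentCloud account and `pip install python-documentcloud`.
    from documentcloud import DocumentCloud

    client = DocumentCloud("you@example.org", "your-password")  # hypothetical login

    # Upload a scanned PDF; the OCR happens on DocumentCloud's servers.
    doc = client.documents.upload("scanned_report.pdf", title="Scanned report")

    # Once the document has finished processing, the OCR'd text is available.
    print(doc.full_text)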
3) For scanned documents: The Data Science Toolkit
The Data Science Toolkit allows you to do lots of clever things, including converting PDFs using OCR with the File2Text converter. Upload your document, and you’re away. It also works on other document formats, as well as PNGs, TIFFs and JPEGs.
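The toolkit also exposes its converters over HTTP, so the same conversion can be scripted. In the sketch below the endpoint path and form field name are assumptions based on the converter’s name – check the Data Science Toolkit documentation (or your own DSTK server) for the exact details.

    # Hedged sketch using the requests library: POST a file to the (assumed)
    # File2Text endpoint and print the extracted text that comes back.
    import requests

    DSTK_FILE2TEXT = "http://www.datasciencetoolkit.org/file2text"  # assumed URL

    with open("scanned_document.pdf", "rb") as f:
        response = requests.post(DSTK_FILE2TEXT, files={"inputfile": f})

    print(response.text)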
4) For stripping out tables: PDF2XL
If you’re willing to shell out around £70 then PDF2XL is recommended as a useful piece of software for stripping tables out of PDFs and into Excel.
5) For automating the process: Scrape from PDF to XML using Scraperwiki
Scraperwiki is a collaborative website for scraping all sorts of hard-to-find information into some sort of useful format, so it’s no surprise that PDFs are a common problem there. They have a template scraper for converting PDF documents to XML (a more structured format) – if you can understand a little bit of programming then you can try to adapt it to your own purposes.
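For a flavour of what that scraper does, here is a stripped-down sketch using the scraperwiki Python library’s pdftoxml helper (which wraps the pdftohtml/poppler tool) together with lxml – the URL is hypothetical and the element handling is illustrative only.

    import urllib.request
    import lxml.etree
    import scraperwiki  # needs the pdftohtml (poppler-utils) binary installed

    pdf_url = "http://example.com/some-report.pdf"  # hypothetical URL
    pdf_data = urllib.request.urlopen(pdf_url).read()

    # pdftoxml turns the PDF into XML in which each run of text is a <text>
    # element carrying left/top coordinates you can use to rebuild rows and columns.
    xml_data = scraperwiki.pdftoxml(pdf_data)
    if isinstance(xml_data, str):            # fromstring wants bytes when the XML
        xml_data = xml_data.encode("utf-8")  # carries an encoding declaration
    root = lxml.etree.fromstring(xml_data)

    for el in root.findall(".//text"):
        print(el.attrib.get("left"), el.attrib.get("top"), el.text)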
6) If it’s held by a public body and you have time: a well-written FOI request
Do you need all the data in the PDF or just some? Is that data available elsewhere? Try an advanced search using a phrase from the data in quotes, adding filetype:xls, to see if you can find the spreadsheet it comes from. Or submit an FOI request for the data, stipulating that it be provided in spreadsheet or CSV (comma separated values) format (if the PDF was supplied in response to an FOI request in the first place, go back and ask for the information in that format).
It’s also a good idea to ask how the information is stored, including any software used: you can then check with the software vendor how easily the information can be extracted, and bat away any excuses the body comes back with.
7) Add your own here
There must be others – tell me your own tips.
UPDATE: On Twitter, Simon Rogers uses Acrobat Pro; Kevin Anderson uses Omnipage; and Jack Schofield uses Zamzar.
So, my preference is not a free option but it works pretty well for us: Adobe Professional. If whoever put the table together hasn’t deliberately gone out of their way to make things difficult, you just highlight the table, control-click (on a Mac – I think it’s right-click on a PC) and you get either "Copy as table" or "Download as a spreadsheet" as options. This means you can either copy and paste into a new spreadsheet, or it opens as a CSV. Sometimes a line break in the table can screw this up, so I often just bring in the data first, sometimes in two or more parts, and then add the headings manually afterwards. Some of my colleagues do it by saving the PDF as an XML file but that’s beyond me.

As I always say, PDFs are the devil’s format and should never, EVER, be used for data. It’s fine for pictures, but why organisations take a spreadsheet and put it into a PDF, I will never know.
I am in the middle of doing something similar. Another tool (once you extract the data) is Data Wrangler from Stanford.edu. Brilliant for cleaning up rows and columns of gloopy data. http://vis.stanford.edu/wrangler/
Thanks for the comment on getting data out of PDFs @kellyfincham
http://www.pdfescape.com