Predicting Road Accident Fatality Likelihood

Introduction:

The costs of fatalities and injuries due to traffic accidents have a great impact on society. In recent years, researchers have paid increasing attention to the factors that significantly affect the severity of driver injuries in traffic accidents. Applying data mining techniques to traffic accident records can help reveal the driver behaviour, roadway conditions and weather conditions that are causally connected with different injury severities.

Why This Project:

Motor vehicles are the second-leading cause of death due to injury. The United States accounts for a disproportionately high number of road accidents compared to other countries.

Project Objective :

The objectives of our project are as follows:
● Find the factors affecting road accident fatalities in Seattle and Boston
● Build a model to predict road accident fatality based on climate and other important variables

About The Data :

The data for Seattle and Boston were scraped separately.

Data For Seattle :

Wunderground Historical Weather Data - The data were scraped using BeautifulSoup and consist of weather information such as temperature, fog, wind speed, humidity and precipitation. The code for scraping the Wunderground data is as follows.
Scraping Weather Data - Code
# -*- coding: utf-8 -*-
"""
Created on Sat Sep 10 17:17:21 2016
@author: venkatesh
"""

import requests
from bs4 import BeautifulSoup
import datetime
from dateutil import parser
import pandas as pd
import os


CWD = os.path.dirname(os.path.abspath(__file__))
DOWNLOAD_PATH = os.path.join(CWD, 'weather.csv')

class WundergroundScraper(object):
	'''
	Takes a designated Wunderground city key and quickly allows you to download
	 historical weather information between a range of dates.
	Usage:
	wunder = ws.WundergroundScraper()
	wunder.download_date_range('2009-06-17', '2015-09-29')
	'''

	def __init__(self, city='KBFI'):
		'''
		INPUT:
			city -> string; Wundeground city key
		Initiates the Wunderground scraper class. 
		'''
		self.city = city
		self.url = 'http://www.wunderground.com/history/airport/'\
				   '{a}/{y}/{m}/{d}/DailyHistory.html'
		self.data = []

	def download_date_range(self, start_dt, end_dt, f_path=DOWNLOAD_PATH):
		'''
		INPUT:
			start_dt -> string; start date of scrape
			end_dt -> string; end date of scrape
			f_path -> string; path to save scraped data
		For each date in the given date range, scrape and format the historical
		weather data.  Then save to a csv.
		'''
		end_dt = parser.parse(end_dt)
		start_dt = parser.parse(start_dt)
		diff = end_dt - start_dt
		dates = [end_dt - datetime.timedelta(days=x)
				 for x in range(1, diff.days)]
		for d in dates:
			table = self._make_request(d.year, d.month, d.day)
			header = self._get_header(table)
			self._write_data(str(d.date()), table, header)
		self._save_to_csv(f_path)
		

	def _make_request(self, year, month, day):
		'''
		INPUT:
			year -> int; year of the scrape
			month -> int; month of the scrape
			day -> int; day of the scrape
		OUTPUT:
			soup object; table content of interest from the scrape
		Makes the URL request, retrieves the HTML text, then returns the
		table of interest.
		'''
		url = self.url.format(a=self.city, y=year, m=month, d=day)
		r = requests.get(url)
		soup = BeautifulSoup(r.text, 'html.parser')
		table = soup.findAll('div', {'id': 'observations_details'})
		return table[0]

	def _get_header(self, table):
		'''
		INPUT:
			table -> soup object; table content of interest from the scrape
		OUTPUT:
			list; column name of headers
		Retrieves and returns the headers of the table of interest.
		'''
		data = ['date']
		for header in table.findAll('th'):
			for h in header.strings:
				if '(' not in h:
					data.append(h.strip())
		return data

	def _write_data(self, date, table, header):
		'''
		INPUT:
			date -> string; date of table scraped
			table -> soup object; scraped table of interest
			header -> list; header of scraped table of interest
		Parses each data row of the scraped table and stores it as a record.
		'''
		for row in table.findAll('tr', {'class': 'no-metars'}):
			data = [date]
			for col in row.findAll('td'):
				content = col.text.strip('\n').strip()
				data.append(content)
			self._data_to_dict(header, data)

	def _data_to_dict(self, header, row):
		d = dict()
		for i, h in enumerate(header):
			d[h] = row[i]
		self.data.append(d) 

	def _save_to_csv(self, f_path):
		df = pd.DataFrame(self.data)
		df.to_csv(f_path, index=False)
		print('{0} downloaded.'.format(f_path))
		
wunder = WundergroundScraper()
wunder.download_date_range('2009-01-01', '2016-09-01') 
                                                 

Socrata 911 Response Data :

The 911 response data consist of accident information such as incident type, date, time, latitude, longitude and the number of people injured. The code for scraping the 911 response data is as follows.
Scraping Response Data - Code
# -*- coding: utf-8 -*-
"""
Created on Sat Sep 10 17:21:02 2016
@author: venkatesh
"""

import requests
import os


class SocrataConnection(object):

	def __init__(self, url, token, limit):
		self.url = url
		self.token = token
		self.limit = limit
		self.headers = sorted(self._headers())
		self.primary_id = None

	def _headers(self):
		query = '{0}/?$limit=1'.format(self.url)
		r = requests.get(query)
		return r.json()[0].keys()

	def get_headers(self):
		return self.headers

	def get_rowcount(self):
		if self.primary_id is None:
			print('Please set primary id first. (Use .set_primary_id("id"))')
			return None

		query = '{0}?$select=count({1})'.format(self.url, self.primary_id)
		result_field = 'count_{0}'.format(self.primary_id)
		r = requests.get(query)
		return int(r.json()[0][result_field])

	def set_primary_id(self, primary_id):
		if primary_id in self.headers:
			self.primary_id = primary_id
			self.headers.remove(primary_id)
			self.headers.insert(0, primary_id)
		else:
			print('Column does not exist')

	def download_csv(self, rows=None, file_name='untitled.csv', headers=True):
		if rows is None:
			rows = self.get_rowcount()

		if self.primary_id is None:
			print('Please set primary id first. (Use .set_primary_id("id"))')
			return None

		if os.path.exists(file_name):
			os.remove(file_name)

		if headers:
			self._write_csv_row(dict(zip(self.headers, self.headers)),
								file_name)

		if self._write_to_csv(rows, file_name):
			print('Download Complete')
		else:
			print('Download Incomplete')


	def _write_to_csv(self, rows, file_name):
		offset = 0
		link = '{0}?$$app_token={1}&$order={2} DESC&$limit={3}&$offset={4}'
		for i in range((rows // self.limit) + 1):
			query = link.format(self.url, self.token, self.primary_id,
								self.limit, offset)
			r = requests.get(query)
			if r.status_code == 200:
				for row in r.json():
					self._write_csv_row(row, file_name)
				offset += self.limit
			else:
				return False
		return True

	def _write_csv_row(self, row, file_name):
		data = []
		for col in self.headers:
			try:
				item = '"' + str(row[col]).strip() + '"'
			except KeyError:
				item = ' '
			data.append(item)
		with open(file_name, 'a') as out_file:
			out_file.write(','.join(data) + ' \n')


class SocrataAPI(object):

	def __init__(self, token, limit=50000):
		self.token = token
		self.limit = limit

	def request(self, url, show_details=True):
		return SocrataConnection(url, self.token, self.limit)
		
token = 'Opp5gz1KaGplrPjbqnSsWkqHB'
api = SocrataAPI(token)
r = api.request('https://data.seattle.gov/resource/pu5n-trf4.json')
headers = r.get_headers()
r.set_primary_id('cad_cdw_id')
r.download_csv(file_name='raw_911_response.csv')
												

Data For Boston:


Cambridge Accidents Data :

This is an open dataset of accident records for Cambridge.

Predictive Modeling:

Visualizing The Workflow For Seattle:

The Wunderground historical weather data and Socrata 911 response data are merged on date in the scrape modules. Once the data are transformed, exploratory data analysis is performed in Python, and machine learning algorithms are then applied to the data.
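The merge step can be sketched with pandas. The column names below are placeholders; the real CSVs use the headers produced by the two scrapers above.

```python
import pandas as pd

# Hypothetical column names standing in for the scraped CSV headers.
weather = pd.DataFrame({
    'date': ['2016-01-01', '2016-01-02'],
    'temp_f': [41.0, 38.5],
    'precip_in': [0.12, 0.0],
})
responses = pd.DataFrame({
    'date': ['2016-01-01', '2016-01-01', '2016-01-02'],
    'incident_type': ['Motor Vehicle Accident'] * 3,
})

# Left-join each 911 record with that day's weather observations.
merged = responses.merge(weather, on='date', how='left')
print(merged.shape)  # (3, 4)
```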

Key Insights :

The plot shows that only a few variable pairs have a linear relationship; most of the scatter plots show widely dispersed points, which means there is considerable randomness in the data. This randomness is a major concern when it comes to prediction.
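A quick numeric check for such linear relationships is a pairwise correlation matrix; a minimal sketch with made-up numbers in place of the merged Seattle data:

```python
import pandas as pd

# Toy data standing in for the merged dataset; real column names
# come from the scraped CSVs.
df = pd.DataFrame({
    'temp_f': [41, 38, 55, 60, 47],
    'precip_in': [0.1, 0.3, 0.0, 0.0, 0.2],
    'accidents': [5, 7, 3, 2, 6],
})

# Pairwise Pearson correlations: values near +/-1 suggest a linear
# relationship, values near 0 suggest the scatter seen in most plots.
corr = df.corr()
print(corr.loc['precip_in', 'accidents'])
```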

Model Summary :

The models trained are Logistic Regression, Random Forest and Gradient Boosting. The log of the best model in each category is as follows.
The maximum accuracy is obtained with Random Forest.
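Comparing the three model families can be done with cross-validation in scikit-learn. The sketch below uses synthetic data in place of the merged accident/weather features, so the scores are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the merged accident/weather features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    'Logistic Regression': LogisticRegression(max_iter=1000),
    'Random Forest': RandomForestClassifier(n_estimators=100, random_state=0),
    'Gradient Boosting': GradientBoostingClassifier(random_state=0),
}
# 5-fold cross-validated accuracy for each model.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print('{0}: {1:.3f}'.format(name, scores.mean()))
```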

Grid Search :

Parameters that are not directly learnt within estimators can be set by searching a parameter space and evaluating estimator performance with cross-validation. Using grid search to optimize these hyperparameters of the Random Forest, the model output is as follows.
The accuracy increased from 65% to 72% by using grid search.
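A grid search over Random Forest hyperparameters can be sketched with scikit-learn's GridSearchCV. The grid below is hypothetical (the post does not list the exact parameter space searched), and synthetic data stands in for the real features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Hypothetical grid -- not the exact one used in the post.
param_grid = {
    'n_estimators': [50, 100],
    'max_depth': [3, None],
    'max_features': ['sqrt', None],
}
# Exhaustively evaluate every combination with 5-fold cross-validation.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
print(round(search.best_score_, 3))
```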

Visualizing The Workflow For Boston :

The Cambridge government data is considered weak data because it does not contain many predictive variables. However, fields such as latitude and longitude, along with time, can be used to find patterns. Exploratory data analysis is done with Tableau, and model building is done with R.

Exploratory Data Analysis :

Map based on Accidents Occurred In Boston:
The distribution of road accidents based on location is as follows
Based on the map, we can see that Massachusetts Avenue and Cambridge Street account for more accidents than other locations.
Distribution Of Accidents Based on Day:
The distribution of accidents occurred over the week is as follows
Thursday and Friday see the most accidents, while Sunday is the safest day. The likely reason is that fewer cars travel on Sunday; in other words, traffic is lighter.
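Counting accidents by day of week is a one-liner with pandas; the timestamps below are made up for illustration.

```python
import pandas as pd

# Toy timestamps standing in for the Cambridge accident records.
df = pd.DataFrame({'datetime': pd.to_datetime([
    '2015-06-04 08:15', '2015-06-04 17:40',  # Thursday
    '2015-06-05 18:05',                      # Friday
    '2015-06-07 11:30',                      # Sunday
])})

# Count accidents per day of week.
by_day = df['datetime'].dt.day_name().value_counts()
print(by_day)
```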
Distribution Of Accidents Based on Time:
The distribution of accidents by time of day is as follows

Bubble Chart of Automobile Which Hit :


Auto, Passenger Car and Light Truck vehicles (such as vans and minivans) are the vehicle types most often responsible for hitting others.

Bubble Chart of Automobile Which Got Hit :

Auto, Parked Vehicle and Bicycle are the vehicle types most often hit in accidents.
Un-Supervised Learning Using Hierarchical Clustering:
Based on the Within Sum of Squares plot, we can set the number of clusters to 4 (chosen using the elbow, or "knee", rule).
The hierarchical clustering dendrogram plot with the percentage coverage is as follows.
AU stands for Approximately Unbiased and BP for Bootstrap Probability (p-value). The tree is constructed using Euclidean distance.
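The clustering in the post was built in R (the AU/BP values come from bootstrapped clustering); an equivalent agglomerative-clustering sketch in Python with SciPy, on toy data, looks like this:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy 2-D points forming four loose groups, standing in for the
# accident features clustered in the post.
centers = np.array([[0, 0], [5, 5], [0, 5], [5, 0]])
X = np.vstack([c + rng.normal(scale=0.3, size=(10, 2)) for c in centers])

# Agglomerative clustering on Euclidean distance (Ward linkage),
# then cut the tree into 4 clusters.
Z = linkage(X, method='ward')
labels = fcluster(Z, t=4, criterion='maxclust')
print(sorted(set(labels)))  # four distinct cluster labels
```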

Time Series Forecasting – Number Of Accidents Over The Years:


The model summary of the time series forecasting is as follows
The estimated forecast for the upcoming years is as follows
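The post's forecasting model (fit in R) is not reproduced here; a minimal linear-trend extrapolation on hypothetical yearly counts illustrates the idea.

```python
import numpy as np

# Hypothetical yearly accident counts -- the post's actual series
# and model are not reproduced here.
years = np.array([2010, 2011, 2012, 2013, 2014, 2015])
counts = np.array([410, 395, 430, 455, 470, 490])

# Fit a linear trend and extrapolate two years ahead.
slope, intercept = np.polyfit(years, counts, 1)
for y in (2016, 2017):
    print(y, round(slope * y + intercept))
```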
Future Scope:
● Build a live app for predicting the likelihood of road accidents based on live data
● Find ways to account for randomness


About the author: Venkatesh Subramaniam
