Python, Web Services, JSON, and the ISS, Oh My!

11/14/2017

In this post I will talk about how to handle JSON data from an external API using Python. Making calls to web services is simple with Python: in just a few lines of code you can track the International Space Station's (ISS) position and overhead pass times in real time, with a sleek graphical user interface. The project files can be downloaded here: https://codeclubprojects.org/en-GB/python/iss/.

The Turtle module is an object-oriented graphics tool that draws to a canvas or screen. Turtle's methods include forward(), backward(), left() and right(), as if you were telling a turtle which direction to draw. Turtle will draw over a NASA-curated 2D map of Earth, so place the 'map.jpg' file in your project directory.
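
If you have not used Turtle before, here is a minimal sketch (separate from the project files) showing those drawing primitives on a blank screen:

import turtle

t = turtle.Turtle()
t.forward(100)   # draw 100 units along the current heading
t.left(90)       # rotate 90 degrees counterclockwise
t.forward(50)
t.backward(25)   # retreat along the same heading
t.right(45)      # rotate 45 degrees clockwise
turtle.done()    # keep the window open until it is closed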

So one of the first things we need to do is instantiate a turtle screen with the following command.

# turtle provides a simple graphical interface to display data
# we need a screen to plot our space station position
import turtle
screen= turtle.Screen()

The image is 720 pixels wide by 360 pixels high, so our turtle screen should match the image size.


# the image size is 720w x 360h
screen.setup(720,360)
# set coordinates to map longitude and latitude
screen.setworldcoordinates(-180,-90,180,90)
# set background picture to NASA world map, centered at 0
screen.bgpic('map.jpg')

 


To represent the ISS on the 2D map let's choose an image. It doesn't have to be the 'iss.png' icon used here, but it is a nice icon, so Houston, we have liftoff!

# adds turtle object with name iss to list of objects
screen.register_shape('iss.png')
iss= turtle.Turtle()
iss.shape('iss.png')

 

Our location object will let turtle write text to the screen at a specific latitude and longitude; we will use it to label each ground location with the ISS pass time. Instantiate a Turtle() to create the object with the following code.

 

# location object for turtle to plot
location= turtle.Turtle()

# used later to write text
style=('Arial',6,'bold')
location.color('yellow')

Now, before we can tell our turtle to write the ISS overhead time, we need the latitude and longitude of the ground locations we want to check. A quick Google search gives us coordinates to store in a dictionary.

# Cape Canaveral ---> 28.392218, -80.607713
# Central Park, NYC ---> 40.782865, -73.965355
# create python dictionary to iterate and plot time of overhead location
coords={}
coords['nasa_fl']=(28.523397, -80.681874)
coords['centralp']=(40.782865, -73.965355)

To call the API we first need a URL. The endpoint 'http://api.open-notify.org/astros.json' returns JSON data about the people currently in space, which makes a good first request to get familiar with the service.

import urllib.request
import json
url='http://api.open-notify.org/astros.json'
response=urllib.request.urlopen(url)
result=json.loads(response.read())
print(result['people'])

To make the call, urllib.request opens the URL, and the response is read and loaded as JSON. JSON stands for JavaScript Object Notation and is a convenient, lightweight way to organize data.


The printed output is the contents of the JSON data; it is accessed much like a Python dictionary, using keys and indices.
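
For example, assuming the response follows the documented astros.json layout (a 'number' count plus a 'people' list of dictionaries), you can drill into it with keys and indices:

# total number of people currently in space
print(result['number'])
# name and craft of the first person in the list
print(result['people'][0]['name'], result['people'][0]['craft'])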

import time

# loop over the locations and plot when the iss will pass overhead
for k,v in coords.items():
    pass_url= 'http://api.open-notify.org/iss-pass.json'
    pass_url= pass_url+'?lat='+str(v[0])+'&lon='+str(v[1])
    pass_response= urllib.request.urlopen(pass_url)
    pass_result= json.loads(pass_response.read())
    # take one of the predicted pass times from the 'response' list
    over= pass_result['response'][1]['risetime']

    # write the pass time at the location's coords
    location.penup()
    location.color('yellow')
    location.goto(v[1],v[0])
    location.write(time.ctime(over), font=style)
    location.pendown()

The above code block makes a call to the API for each location, loads the JSON data, parses out the overhead pass time (when the ISS will be over the specified position) and then writes that time at the given location on the map.
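
For reference, the pass-time payload looks roughly like the sketch below (field names follow the Open Notify documentation; the values here are placeholders). The loop above indexes pass_result['response'][1]['risetime'] to pick one of the predicted passes:

example_pass = {
    'message': 'success',
    'response': [
        {'duration': 600, 'risetime': 1510700000},   # placeholder values
        {'duration': 540, 'risetime': 1510705800},
    ],
}
print(time.ctime(example_pass['response'][1]['risetime']))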


# initial call to the api to get the current iss position
loc_url= 'http://api.open-notify.org/iss-now.json'
loc_response= urllib.request.urlopen(loc_url)
loc_result= json.loads(loc_response.read())
# the coords are packed into the json under the 'iss_position' key
iss_position= loc_result['iss_position']
lat= float(iss_position['latitude'])
lon= float(iss_position['longitude'])

# set up a while loop to plot the moving iss
while True:
    # the iss position updates every few seconds
    time.sleep(1.5)

    # call the web service again to get fresh coords
    loc_url= 'http://api.open-notify.org/iss-now.json'
    loc_response= urllib.request.urlopen(loc_url)
    loc_result= json.loads(loc_response.read())
    iss_position= loc_result['iss_position']
    lat= float(iss_position['latitude'])
    lon= float(iss_position['longitude'])

    # move the iss turtle to the new coords
    iss.setheading(90.0)
    iss.penup()
    iss.goto(lon,lat)
    iss.pendown()

 

The above code block makes a call to the API, loads the JSON data, parses out the ISS's current geographic coordinates and plots the ISS icon there. The while loop runs forever so the ISS is tracked constantly.
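
If you want to tidy the loop up, one option (a sketch, not part of the original project files) is to wrap the position request in a small helper function:

def current_iss_position():
    # returns the current (lat, lon) of the iss as floats
    response = urllib.request.urlopen('http://api.open-notify.org/iss-now.json')
    result = json.loads(response.read())
    pos = result['iss_position']
    return float(pos['latitude']), float(pos['longitude'])

while True:
    time.sleep(5)                 # poll every few seconds
    lat, lon = current_iss_position()
    iss.penup()
    iss.goto(lon, lat)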


If you like these blog posts or want to comment and or share something do so below and follow py-guy!

Topic Discovery in python!

07/23/2017

So I still haven't figured out whether I want to make one blog post a week or more than one, but I will try to post at least once a week on topics in computer science. We'll see where it goes; it will be very exciting and most certainly worth the click.

This week I plan on exploring a data set of over 5,000 film entries scraped from IMDB in an effort to briefly discuss machine learning, particularly Latent Dirichlet Allocation. I will not go into the theory because that is beyond the scope of this blog; these aren't the droids you're looking for.

However, nltk and gensim provide extensive APIs for processing human language. Anything from stemming words down to their roots to tokenizing a document for further analysis is made easy with these modules.
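
As a quick toy example (separate from the movie data) of what tokenizing and stemming look like:

from nltk.tokenize import RegexpTokenizer
from nltk.stem.porter import PorterStemmer

tokenizer = RegexpTokenizer(r'\w+')
stemmer = PorterStemmer()

sample = "Murder, murdered and murdering ships!"
tokens = tokenizer.tokenize(sample.lower())    # punctuation is dropped by the regex
stems = [stemmer.stem(t) for t in tokens]
print(tokens)
print(stems)   # the three 'murder' variants collapse to the same stem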

 


import pandas as pd
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.porter import PorterStemmer
from gensim import corpora, models
import gensim
import numpy as np
import matplotlib.pyplot as pp
import re

 

Let's start by reading in the csv file, movie_metadata.csv. A link to the Kaggle download is commented in the code below.

 


## https://www.kaggle.com/deepmatrix/imdb-5000-movie-dataset
movie=pd.read_csv("movie_metadata.csv")
movie.head()


 

Latent Dirichlet Allocation estimates topic assignments for the words in a collection of documents, along with how heavily each word contributes to each topic, for a fixed number of topics. Let's assume each document exhibits multiple topics. We will be looking at the plot_keywords and genres columns.

 


movie['plot_keywords']

 

Next let's remove the pipe separators with a list comprehension and check that it worked.

 


keyword_strings=[str(d).replace("|"," ") for d in movie['plot_keywords']]
keyword_strings[1]


Good!

 

Stemming reduces words down to their root word and is particularly useful in developing insightful NLP models.

 

docs=[d for d in keyword_strings if d.count(' ')==5]
len(docs)
texts=[]

#create english stop words list
en_stop= get_stop_words('en')

# create p_stemmer of class PorterStemmer
# stemmer reduces words in a topic to its root word
p_stemmer= PorterStemmer()

# init regex tokenizer
tokenizer= RegexpTokenizer(r'\w+')

# for each document clean and tokenize document string,
# remove stop words from tokens, stem tokens and add to list
for i in docs:
  raw=i.lower()
  tokens=tokenizer.tokenize(raw)
  stopped_tokens=[i for i in tokens if not i in en_stop]
  stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]
  texts.append(stemmed_tokens)

 

The next block of code turns the tokenized documents into structures gensim can work with: a dictionary mapping each term to an id, and a bag-of-words corpus recording the term counts for each document.

 

# turn our tokenized docs into a key value dict
dictionary= corpora.Dictionary(texts)
# convert tokenized docs into a doc matrix
corpus=[dictionary.doc2bow(text) for text in texts]

 

The next line of code builds the Latent Dirichlet Allocation model, taking the corpus, the number of topics and the number of training passes. Printing the model shows, for each topic, the words assigned to it and their estimated weights.

 

ldamodel=gensim.models.ldamodel.LdaModel(corpus,num_topics=2,id2word=dictionary,passes=20)
print(ldamodel.print_topics(num_topics=2,num_words=5))
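
To see the "each document exhibits multiple topics" assumption in action, you can also ask the trained model for the topic mixture of a single document (a quick sketch using gensim's get_document_topics):

# topic proportions for the first keyword document
print(ldamodel.get_document_topics(corpus[0]))   # list of (topic_id, probability) pairs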


Let's parse this output into something we can handle: pull each topic's words and weights apart, then plot the top five words for each topic.

 

top=ldamodel.print_topics(num_topics=2,num_words=5)
topic_num=[]
topic_str=[]
topic_freq=[]

for a in top:
  topic_num.append(a[0])
  topic_str.append(" ".join(re.findall(r'"([^"]*)"',a[1])))
  w0,w1,w2,w3,w4=map(float, re.findall(r'[+-]?[0-9.]+', a[1]))
  tup=(w0,w1,w2,w3,w4)
  topic_freq.append(tup)

words0=topic_str[0].split(" ")
words1=topic_str[1].split(" ")
words=words0+words1

worddict0=dict(zip(words0,topic_freq[0]))
worddict1=dict(zip(words1,topic_freq[1]))

sorted_list0 = [(k,v) for v,k in sorted([(v,k) for k,v in worddict0.items()])]
sorted_list1 = [(k,v) for v,k in sorted([(v,k) for k,v in worddict1.items()])]

y_pos = np.arange(5)

freqs=[a[1] for a in sorted_list0]
ws=[a[0] for a in sorted_list0]
freqs1=[a[1] for a in sorted_list1]
ws1=[a[0] for a in sorted_list1]

pp.bar(y_pos, freqs, align='center', alpha=0.5, color=['coral'])
pp.xticks(y_pos, ws)
pp.ylabel('word contributions')
pp.title('Predicted Topic 0 from IMDB Plot Keywords')
pp.show()

pp.bar(y_pos, freqs1, align='center', alpha=0.5, color=['coral'])
pp.xticks(y_pos, ws1)
pp.ylabel('word contributions')
pp.title('Predicted Topic 1 from IMDB Plot Keywords')
pp.show()


This process can then be repeated for any genre of film in the IMDB data set, as sketched below.
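
A rough sketch of that idea, restricting the keyword documents to a single genre before rebuilding the corpus (the genres column is pipe-separated in movie_metadata.csv; 'Comedy' here is just a placeholder):

genre = 'Comedy'   # placeholder genre
mask = movie['genres'].astype(str).str.contains(genre)
genre_keywords = [str(d).replace("|", " ") for d in movie.loc[mask, 'plot_keywords']]
# from here, tokenize, stem and rebuild the dictionary/corpus exactly as above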

If you like these blog posts or want to comment and or share something do so below and follow py-guy!

Solar Radiation Prediction

07-21-2017

scikit-learn is a fantastic set of tools for machine learning in Python. It is built on numpy, scipy, and matplotlib (introduced in the first py-guy post) and makes data analysis and visualization simple and intuitive. scikit-learn provides classification, regression, clustering, dimensionality reduction, model selection, and preprocessing algorithms, making data analysis in Python accessible to everyone. We will cover an example of linear regression in this week's post, exploring solar radiation data from a NASA hackathon.

First, after importing the packages we need, let's read in the SolarPrediction.csv data set (available on Kaggle).
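
A minimal read-in, assuming the package aliases used throughout this post, looks like this:

import numpy as np
import pandas as pd
import matplotlib.pyplot as pp
import seaborn as sns

# SolarPrediction.csv from the Kaggle solar radiation data set
df = pd.read_csv("SolarPrediction.csv")
df.head()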


 

Taking a first look at the data set, note that the time fields (UNIXTime and the date column) are not yet parsed into a useful datetime type, so we will come back to them later.


 

df.shape
df.describe()

Calling the describe method on the data frame returns descriptive statistics and hints that there might be relationships between radiation, humidity and/or temperature.


So let’s look at a correlation plot to get a better feel for any possible relationships.

truthmat= df.corr()
sns.heatmap(truthmat, vmax=.8, square=True)


There is a strong relationship between radiation and temperature (surprisingly or not), so let's choose two features with more ambiguity. Pressure and Temperature will do fine. We will use seaborn, a statistical visualization library built on matplotlib, to explore the relationship between the two.

p = sns.jointplot(x="Pressure", y="Temperature", data=df)
pp.subplots_adjust(top=.9)
p.fig.suptitle('Temperature vs. Pressure')

 


There is a clear positive trend, albeit a noisy one because of the small pressure gradient. Let's do some quick feature engineering to get a better look at the trend.

 

#Convert time to_datetime
df['Time_conv'] = pd.to_datetime(df['Time'], format='%H:%M:%S')

#Add column 'hour'
df['hour'] = pd.to_datetime(df['Time_conv'], format='%H:%M:%S').dt.hour

#Add column 'month'
df['month'] = pd.to_datetime(df['UNIXTime'].astype(int), unit='s').dt.month

#Add column 'year'
df['year'] = pd.to_datetime(df['UNIXTime'].astype(int), unit='s').dt.year

#Duration of Day
df['total_time'] = pd.to_datetime(df['TimeSunSet'], format='%H:%M:%S').dt.hour - pd.to_datetime(df['TimeSunRise'], format='%H:%M:%S').dt.hour
df.head()

First we convert the time columns to datetime so we can manipulate them later, then add hour, month and year columns for a more granular view. Much better!


With sklearn's linear regression we can train a model on the data and then test its accuracy. We drop the Temperature column from the feature matrix because that is the target we want to predict.

 

y = df['Temperature']
X = df.drop(['Temperature', 'Data', 'Time', 'TimeSunRise', 'TimeSunSet','Time_conv',], axis=1)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train,y_train)

Now let’s predict the temperature given the features.

 

X.head()
predictions = lm.predict( X_test)
pp.scatter(y_test,predictions)
pp.xlabel('Temperature Test')
pp.ylabel('Predicted Temperature')


The MSE and RMSE values suggest the model performed well, and as you can see in the scatter plot there is a tight positive trend between the test temperatures and the predictions.

from sklearn import metrics

print(metrics.mean_squared_error(y_test, predictions))
print(np.sqrt(metrics.mean_squared_error(y_test, predictions)))
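
If you want a single goodness-of-fit number alongside the MSE and RMSE, one option (not part of the original analysis) is the R² score:

from sklearn.metrics import r2_score
print(r2_score(y_test, predictions))   # fraction of temperature variance explained by the model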


If you like these blog posts or want to comment and or share something do so below and follow py-guy!

Note: I referenced Kaggler Sarah VCH's notebook in making today's blog post, specifically the feature engineering code in the fifth code block. If you want to see her notebook, the link is below.

https://www.kaggle.com/sarahvch/investigating-solar-radiation

First blog post ~ python packages

07-14-2017

Welcome to py-guy! The py-guy blog explores science, culture and technology with simple examples and thoughtful discussions. For the first post I will talk about why Python is a useful programming language and some nifty things it can do while exploring the MoMA data set. The Museum of Modern Art collection is an excellent data set containing the title, artist, date, medium, etc. of every artwork in the museum, and it is perfect for the scope of this post. To download the data set and run your own analysis, the link is below.

https://www.kaggle.com/momanyc/museum-collection

Python seamlessly enables all stages of data manipulation, and the matplotlib, numpy, and pandas packages streamline the process of intuitive data analysis. At first I felt cheated that I could just import a package to run all the calculations without knowing what is going on under the hood, but after my first few modules I can say these packages are powerful components in the py-guy toolbox.

import math, json, collections, itertools
from collections import Counter
import numpy as np
import pandas as pd
import matplotlib.pyplot as pp

arts=pd.read_csv("artworks.csv",names=['id','title','artist-id','name','date','medium','dimensions','aquisition-date','credit','catalogue','department','classification','object-number','diameter','circumference','height', 'length', 'width', 'depth', 'weight', 'duration'],dtype='str')
arts.head()

With pandas there is a sort method you can call on any data frame to sort in ascending or descending order. Pandas enhances numpy with labeled data and descriptive indices, robust handling of common data formats and missing data, and relational-database-style operations.


df=pd.DataFrame(arts)
# convert the date column to numeric so it sorts and filters correctly
df['date']=pd.to_numeric(df['date'], errors='coerce')
df=df.sort_values('date')

romanticism= df[(df['date']>=1790) & (df['date']<=1880)]
modern= df[(df['date']>=1860) & (df['date']<=1945)]
contemporary= df[(df['date']>=1946) & (df['date']<=2017)]

df1=romanticism.sort_values('date')
df1[-5:] # check if successful

Then, using matplotlib, we can plot a histogram of the dates, setting the bins to span the years covered by the art periods.

 

# list comprehension to pull only dates of type float from df
dat=[d for d in df['date'] if np.isnan(d)==False]

# set plot
pp.hist(dat,bins=range(1790,2017))
pp.ylabel('Number of Artworks')
pp.xlabel('Year')
pp.title('Artworks per Year')


The Python language is expressive in its readability and simplicity. In only a few lines of code you can read, manipulate and plot data.

 

# according to wikipedia art periods are defined by the
# development of the work of an artist, groups of artists or art movement
# Romanticism -1790 - 1880
# Modern art - 1860 - 1945
# Contemporary art - 1946–present

periods = ('Romanticism','Modern','Contemporary')
y_pos = np.arange(3)
arts = [romanticism.size,modern.size,contemporary.size]

pp.bar(y_pos, arts, align='center', alpha=0.5, color=['coral','yellow','teal'])
pp.xticks(y_pos, periods)
pp.ylabel('Artworks')
pp.title('Pieces per Movement')

pp.show()


Collections and list comprehensions are two more powerful tools Python has to offer. I will write another blog post on them, but for now here is a quick example illustrating their utility.


# list comprehension to pull the artist names into a list
nam=[n for n in df['name']]

# Counter comes from the collections import above
name_art=Counter(nam)
# equivalent to collections.Counter(nam)

# grab the ten artists with the most artworks
mc=name_art.most_common(10)

artists=[artist[0] for artist in mc]
common_arts=[count[1] for count in mc]

Let’s try a horizontal bar chart with ‘barh.’

y_pos = np.arange(len(common_arts))
pp.figure(figsize=(10, 3))
pp.barh(y_pos, common_arts, align='center', alpha=0.5)
pp.yticks(y_pos, artists)
pp.xlabel('Number of Artworks')
pp.title('Top 10 Artists with most pieces in Moma')
pp.show()

 


Similarly, this process can be repeated for different variables and scopes, returning some interesting results.

arts=pd.read_csv("artworks.csv",names=['id','title','artist-id','name','date','medium','dimensions','aquisition-date','credit','catalogue','department','classification','object-number','diameter','circumference','height', 'length', 'width', 'depth', 'weight', 'duration'],dtype='str')
df=pd.DataFrame(arts)

cls=[c for c in df['classification']]
cls_count=collections.Counter(cls)

clsCol=cls_count.most_common()
clsArr= [c[0] for c in clsCol]
numCls=[c[1] for c in clsCol]
y_pos = np.arange(len(clsArr))

pp.figure(figsize=(10, 20))
pp.barh(y_pos, numCls, align='center', alpha=0.5)
pp.yticks(y_pos,clsArr)
pp.ylabel('Classification')
pp.xlabel('Number of Artworks')
pp.title('Classification of Artworks')
pp.show()
