Hello Docker!

02/11/2018

This post is about working with containers and Python web applications. But first, a little bit on containers and their place in a development ecosystem. Docker is a technology that provides abstraction and automation at the operating-system level, offering reusability, automation, version control, peer review, and testing capabilities.
This virtualization of bare-bones operating systems into containers enables a microservices model where work is divided into separate units, facilitating scalability, reliability, and testing. In essence, Docker containers allow developers to be accountable for programming features without having to worry about machine dependencies.

The following code blocks give a walk-through of building a dockerized web app: the hello world of Docker.

First create a new directory called “hello_docker” to contain our webserver. Then inside the hello_docker directory create another directory, creatively named “app.” Inside the app directory, create a file called hellodocker.py with the following contents.

from flask import Flask

app = Flask(__name__)

# map the root URL to a simple greeting
@app.route('/')
def hello_world():
    return 'Hello Docker!\n'

# host='0.0.0.0' so the server is reachable from outside the container
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
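If you want to sanity-check the server before containerizing it, you can run it directly (this assumes Flask is already installed on your machine):

cd hello_docker/app
python hellodocker.py
# in another terminal: curl localhost:5000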

 

This web server will run inside a Docker container, so let's create a Dockerfile that will be used to build our Docker image. In the hello_docker directory create a file called Dockerfile with the following contents.

# start from a Python base image
FROM python:3.4
# install the only dependency
RUN pip install Flask==0.10.1
# copy our code into the image and set the working directory
WORKDIR /app
COPY app /app
# run the server when the container starts
CMD ["python", "hellodocker.py"]

 

Dockerfiles contain a set of instructions for Docker to build an image to specification. The first line pulls the Python 3 image as a base, the second installs Flask, WORKDIR and COPY place our code inside the image at build time, and CMD runs the server when the container starts.

Next, to build and run the sample app, enter the following in the terminal:

cd hello_docker
docker build -t hello_docker .
docker run -d -p 5000:5000 hello_docker

 

With the -d and -p flags, docker run starts the app in the background (detached) and forwards port 5000 in the container to port 5000 on the host. The command should output a long container ID confirming a successful launch.
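Alternatively, the --name flag gives the container a friendly handle so you can manage it later (hello_app is an arbitrary name):

docker run -d -p 5000:5000 --name hello_app hello_docker
docker stop hello_app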

[Screenshot: terminal output of docker run showing the new container ID]

Our Docker image is now built and the container is running! To test the application, run the following command and verify the message “Hello Docker!”:

curl $(docker-machine ip default):5000
Hello Docker!
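If you are running Docker natively (Docker Desktop or Linux) rather than through docker-machine, the host is simply localhost:

curl localhost:5000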

 

This walkthrough barely scratched the surface of Docker's capabilities, so if you are interested in experimenting with Docker I've listed a few links to get started with.

https://aws.amazon.com/what-are-containers/

https://www.docker.com/

https://docs.docker.com/

https://docs.docker.com/registry/#alternatives

https://hub.docker.com/

Here are a few more helpful commands.

# to see all of your docker containers (running and stopped)
docker ps -a

# to see your running docker containers
docker ps

# to see your docker images
docker images
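And a few for cleaning up, using the container ID or image name from the commands above:

# stop and remove a container
docker stop <container-id>
docker rm <container-id>

# inspect a container's logs
docker logs <container-id>

# remove an image
docker rmi hello_docker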

If you like these blog posts or want to comment and/or share something, do so below and follow py-guy!

Python Operating System Calls

12/13/2017

Python's lightweight, dynamic interface has proven excellent for networking, data scraping, and GUI tasks. Python's powerful and possibly overlooked os module lets you take a dynamic approach to operating system programming. With Python you can read and write files across your hard drive and interface with the command line using just a few lines of code. This becomes useful in managing dependencies and project states.

First import sys and os

# lister.py
import sys, os

Then we will create a new function, let's call it lister, which takes an argument root from which to walk our directory tree. The for loop iterates through each directory containing files, and os.walk() generates the directory listing either top-down or bottom-up (top-down by default). This will print each directory to the console encapsulated by brackets.

def lister(root):
    for (thisdir, subshere, fileshere) in os.walk(root):
        print('[' + thisdir + ']')

Each step of the walk yields a tuple containing three values: dirpath, dirnames, filenames. Together these make up a tree-like data structure (where * represents many).

[Diagram: each dirpath has * dirnames and * filenames, forming a tree]

The nested for loop iterates through all the files contained by the directory.

        for fname in fileshere:
            path = os.path.join(thisdir, fname)
            print(path)

The directory path is joined with the filename and then printed to the console.

if __name__ == '__main__':
    lister(sys.argv[1])
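Putting the pieces together, the complete lister.py:

# lister.py
import sys, os

def lister(root):
    for (thisdir, subshere, fileshere) in os.walk(root):
        print('[' + thisdir + ']')
        for fname in fileshere:
            path = os.path.join(thisdir, fname)
            print(path)

if __name__ == '__main__':
    lister(sys.argv[1])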

When lister.py is run, the root directory must be passed as a command-line argument so the script knows where to begin the walk, e.g. python lister.py . to start from the current directory.

[Screenshot: running lister.py from the command line with a root directory argument]

The resulting output might be similar to the stream below.

[Screenshot: example output listing each directory in brackets followed by its file paths]

If you like these blog posts or want to comment and/or share something, do so below and follow py-guy!

Python Web services, JSON, and ISS Oh My!

11/14/2017

In this post I will talk about how to handle JSON data from an external API using Python. Making calls to web services is simple with Python; with just a few lines of code you can track the International Space Station's (ISS) position and pass times in real time, with a sleek graphical user interface. The following is a link to the project files download: https://codeclubprojects.org/en-GB/python/iss/.

The turtle module is an object-oriented graphics tool that draws to a canvas or screen. Turtle's methods include forward(), backward(), left() and right(), like telling a turtle in what direction to draw. Turtle will draw over a NASA-curated 2D map of Earth, so you should place the 'map.jpg' file in your project directory.

So one of the first things we need to do is instantiate a turtle screen with the following command.

# turtle provides a simple graphical interface to display data
# we need a screen to plot our space station position
import turtle

screen = turtle.Screen()

The image size is 720w by 360h, so our turtle screen should match the image size.


# the image size is 720w x 360h
screen.setup(720,360)
# set coordinates to map longitude and latitude
screen.setworldcoordinates(-180,-90,180,90)
# set background picture to NASA world map, centered at 0
screen.bgpic('map.jpg')

 

[Image: ISS icon, iss.png]

To represent the ISS on the 2D map let's choose an image. It doesn't have to be the icon above, but it's a nice icon, so Houston, we have liftoff!

# register the image so turtle can use it as a shape
screen.register_shape('iss.png')
iss = turtle.Turtle()
iss.shape('iss.png')

 

Our location object will tell turtle to write text to the screen at a specific position, given the latitude and longitude of a place the ISS will pass over. Instantiate a Turtle() to create the object with the following code.

 

# location object for turtle to plot
location = turtle.Turtle()

# used later to write text
style = ('Arial', 6, 'bold')
location.color('yellow')

Now, before we can tell our turtle to write the ISS overhead times, we need latitude and longitude coordinates for the spots we care about. A quick Google search gives us coordinates to store in a dictionary.

# Cape Canaveral ---> 28.392218, -80.607713
# Central Park, NYC ---> 40.782865, -73.965355
# create python dictionary to iterate and plot time of overhead location
coords = {}
coords['nasa_fl'] = (28.523397, -80.681874)
coords['centralp'] = (40.782865, -73.965355)

To call the API we first need a URL. As a warm-up, 'http://api.open-notify.org/astros.json' returns the list of people currently in space.

import urllib.request
import json

url = 'http://api.open-notify.org/astros.json'
response = urllib.request.urlopen(url)
result = json.loads(response.read())
print(result['people'])

To make the call, urllib.request opens the URL, the raw response is read, and the result is loaded as JSON. JSON stands for JavaScript Object Notation and is a convenient way to organize data.

[Screenshot: JSON response listing the people currently in space]

The lines above show the contents of the JSON data; it is accessed much like a Python dictionary, using keys and indices.
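For example, here is a minimal sketch of pulling fields out of that response (this assumes the documented astros.json format, which also carries a number field alongside people):

# each entry has a name and the craft the person is aboard
for person in result['people']:
    print(person['name'], 'aboard', person['craft'])
print('people in space:', result['number'])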

import time

# for each location, ask the api when the iss will pass overhead
for k, v in coords.items():
    pass_url = 'http://api.open-notify.org/iss-pass.json'
    pass_url = pass_url + '?lat=' + str(v[0]) + '&lon=' + str(v[1])
    pass_response = urllib.request.urlopen(pass_url)
    pass_result = json.loads(pass_response.read())
    over = pass_result['response'][1]['risetime']

    # write the pass time at the location's coords (goto takes x=lon, y=lat)
    location.penup()
    location.color('yellow')
    location.goto(v[1], v[0])
    location.write(time.ctime(over), font=style)
    location.pendown()

The above code block makes a call to the API, loads the JSON data, parses the overhead pass time (when the ISS will next be over the specified position), and then plots that time at the given location.

[Screenshot: map with predicted pass times written at each plotted location]

# init current iss position with a call to the api
loc_url = 'http://api.open-notify.org/iss-now.json'
loc_response = urllib.request.urlopen(loc_url)
loc_result = json.loads(loc_response.read())

# the coords are packed into the json under the iss_position key
# (named position so we don't clobber our location turtle)
position = loc_result['iss_position']
lat = float(position['latitude'])
lon = float(position['longitude'])
# set up an infinite loop to plot the moving iss
while True:
    # the iss position updates every few seconds
    time.sleep(1.5)

    # call the webservice again to get fresh coords
    loc_url = 'http://api.open-notify.org/iss-now.json'
    loc_response = urllib.request.urlopen(loc_url)
    loc_result = json.loads(loc_response.read())
    position = loc_result['iss_position']
    lat = float(position['latitude'])
    lon = float(position['longitude'])

    # move the iss icon to the new coords
    iss.setheading(90.0)
    iss.penup()
    iss.goto(lon, lat)
    iss.pendown()

 

The above code block calls the API, loads the JSON data, parses the ISS's current geographic coordinates, and plots the ISS icon there. The while loop runs forever so the ISS is tracked continuously.

[Screenshot: ISS icon plotted at its current position on the world map]

If you like these blog posts or want to comment and/or share something, do so below and follow py-guy!

Pypack – compact packaging and reusable configuration

10-15-2017

In this post I will talk about how to use pypack to write clean, reusable Python code. In larger, more complex Python applications, import statements tend to clutter code readability and aren't practical to reuse across projects. Say you want to code a data science app: you have your “go-to” packages like numpy, matplotlib, and math. For a web crawler it might be selenium, Beautiful Soup, and requests. With compact packaging and reusable configuration, starting a project is streamlined.

First, specify the packages used in your program in a configuration file named 'config,' defining imports and statements as key-value declarations separated by a blank line.

# config file
imports: 'math','json','collections','itertools','numpy','pandas','matplotlib.pyplot',''

statements: '','','','','np','pd','pp'

This specifies the list of imports pypack will write into a fresh file for your project.

# new.py
# packages from config file

import math
import json
import collections
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as pp

 

The above code snippet is the result of the configuration file contents listed at the beginning of the post. pypack is a simple program, written in under 37 lines of Python, that reads the specified packages from the config file and writes the corresponding import statements to a new Python file for your project.

import sys

# config file should be in the same folder as pypack
# if not, specify the path
f = open('config', 'r')
s = f.read()

First pypack opens the config file and reads the contents to memory.

# parse the config file: strip the labels, then comma-delimit
s1 = s.split('imports:')
s2 = ''.join(s1)
s3 = s2.split('statements:')
s4 = ''.join(s3)
arr = s4.split(',')

 

Python syntax is such that building a list is as simple as encapsulating a loop in brackets (a list comprehension). The snippet above strips the 'imports:' and 'statements:' labels from the file contents and comma-delimits what's left into a single array; the snippet below then splits that array into separate import and statement arrays.

# list comprehension: the first seven entries are the imports
arr = [a for a in arr[:7]]
st = arr[-1].split('\n\n')[0]
arr[-1] = st
# the remaining entries are the statements (aliases)
arr1 = s4.split(',')[7:]
arr1.insert(0, ' ')

Next, the import and statement entries are split apart, cleaned of stray newlines, and lined up so each import pairs with its alias.

# open the output file named on the command line
out = open(sys.argv[1], 'w')
for i in range(len(arr)):
    if arr1[i] == ' ':
        out.write('import ' + arr[i] + '\n')
    else:
        out.write('import ' + arr[i] + ' as ' + arr1[i] + '\n')
out.close()

Finally, pypack opens a new writable Python file and iterates through the two arrays, writing an import line (with its alias, if any) for each package.
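Assuming the script is saved as pypack.py (a hypothetical filename) alongside its config file, generating a fresh project file is a one-liner:

python pypack.py new.py

new.py then starts life with your go-to imports already in place.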

If you like these blog posts or want to comment and/or share something, do so below and follow py-guy!

 

Object Oriented Python Programming

9-24-2017

In Python, object-oriented programming is a simple way to build powerful applications. Consider a real-world object like a pair of shorts. A pair of shorts has a set of attributes and properties that make it unique; for example, it might have pockets, buttons, and zippers for putting the shorts on and taking them off. Essentially, we have a blueprint to make any pair of shorts (give or take a few unique properties). This blueprint is known as a class, the fundamental concept of object-oriented programming and design. Each class defines attributes and methods instantiated by objects. Let's take a look at some example code of our shorts class.

class shorts:
    def __init__(self,waist,length,color):
        self.waist=waist
        self.length=length
        self.color=color
        self.wearing=False

In Python each class has an __init__ constructor to define the unique parameters of each object. self is the reference to the instance being created and is used to initialize attributes on it. Above, the constructor takes parameters self, waist, length, and color and stores those values as attributes used later in the program. The put_on and take_off methods read those attributes and update self.wearing so we know whether the shorts are on or off.

    def put_on(self):
        print("Putting on {}x{} {} shorts".format(self.waist,self.length,self.color))
        self.wearing=True

    def take_off(self):
        print("Taking off {}x{} {} shorts".format(self.waist,self.length,self.color))
        self.wearing=False

 

The code above defines methods that handle the attributes of the shorts object, i.e. self.waist, self.length, self.color, and self.wearing. When executed, the attributes are printed to the console. The code below shows the class instantiated as an object calling the defined methods.

new_shorts= shorts(32,33,"blue")
new_shorts.put_on()
new_shorts.take_off()
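You can also check the state directly through the attribute the methods maintain:

print(new_shorts.wearing) # False after take_off()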

 

Putting on 32x33 blue shorts
Taking off 32x33 blue shorts

If you like these blog posts or want to comment and/or share something, do so below and follow py-guy!


VR Development – BriteLites

8/20/2017

The last post about VR technology I wrote with no real preface as to why write about VR, other than that I wanted to, so this post serves (I should hope) as that preface. VR technology is not groundbreaking; it's been around for years, along with the buckets of sci-fi tropes giving VR center stage. So what makes VR exciting?

[Image: Morton Heilig's Sensorama, an early virtual reality machine]

The Oculus Rift, HTC Vive, and PlayStation VR headsets have positional tracking and specs to boast, and each comes with its own set of accessories; these, however, require an expensive high-end host PC. But these days everyone with access to a supercomputer in their pocket has the option of buying a mobile headset to begin their own VR experience.

With mobile headsets like the Google Daydream, Samsung Gear VR, and the flavors of Google Cardboard, VR is an affordable option for anyone and everyone to develop and/or consume VR content. Game engines like Unity and Unreal support application development, integrating the hardware SDK libraries and a wealth of developer tools to quickly develop a VR app.

 

[Images: Samsung Gear VR headset and Galaxy S6]

Then the next question is: what makes a great VR experience? I have the Samsung Gear without any of the peripheral accessories, so I brainstormed around simplicity. How can I make a VR experience enjoyable using only the interaction supported by the headset, i.e. motion and the touchpad? I reflected on my early childhood playing Lite-Brite with my friends, remembered what a fun experience that was, thought it would port to a great VR experience, and have started prototyping.

 

I decided that the simpler the experience, the more it would immerse and ultimately give the user an intuitive sense of presence. The controls use head movements to explore a world of spherical lites and the touchpad to select different colored lites and to clear them. BriteLites will be available to download through the Oculus store for free in the near future.

 

[Image: BriteLites prototype screenshot]

If you like these blog posts or want to comment and/or share something, do so below and follow py-guy!

 

Topic Discovery in python!

07/23/2017

So I still haven't figured out whether I want to make one blog post a week or more than one, but I will try to post at least once a week on topics in computer science. We'll see where it goes; it will be very exciting and most certainly worth the click.

This week I plan on exploring a data set of over 5,000 film entries scraped from IMDB, in an effort to briefly discuss machine learning, particularly Latent Dirichlet Allocation. I will not go into the theory because that is beyond the scope of this blog; these aren't the droids you're looking for.

However, nltk and gensim provide extensive APIs for processing human language. Anything from stemming words down to their roots to tokenizing a document for further analysis is made easy with these modules.

 


import pandas as pd
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.porter import PorterStemmer
from gensim import corpora, models
import gensim
import numpy as np
import matplotlib.pyplot as pp
import re

 

Let's start by reading in the CSV file, movie_metadata.csv. A link to the Kaggle download is commented in the code below.

 


## https://www.kaggle.com/deepmatrix/imdb-5000-movie-dataset
movie=pd.read_csv("movie_metadata.csv")
movie.head()

[Screenshot: movie.head() showing the first rows of the data set]

 

Latent Dirichlet Allocation estimates word-topic assignments, and the frequencies of those assignments, across a collection of texts called documents. Let's assume each document exhibits multiple topics; here a document will be a film's string of plot keywords (filtered below to a fixed number of words). So we will be looking at the columns plot_keywords and genres.

 


movie['plot_keywords']

 

Next let's replace the pipe delimiters with spaces using a list comprehension and check that it worked.

 


keyword_strings=[str(d).replace("|"," ") for d in movie['plot_keywords']]
keyword_strings[1]

[Screenshot: keyword_strings[1], the second film's keywords as a plain string]

Good!

 

Stemming reduces words down to their root word and is particularly useful in developing insightful NLP models.

 

# keep only the keyword strings with exactly six words
docs = [d for d in keyword_strings if d.count(' ') == 5]
len(docs)
texts = []

#create english stop words list
en_stop= get_stop_words('en')

# create p_stemmer of class PorterStemmer
# stemmer reduces words in a topic to its root word
p_stemmer= PorterStemmer()

# init regex tokenizer
tokenizer= RegexpTokenizer(r'\w+')

# for each document clean and tokenize document string,
# remove stop words from tokens, stem tokens and add to list
for i in docs:
  raw=i.lower()
  tokens=tokenizer.tokenize(raw)
  stopped_tokens=[i for i in tokens if not i in en_stop]
  stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]
  texts.append(stemmed_tokens)

 

The next block of code transforms the cleaned text into identifiable tokens we can manipulate. To do so, let's create a dictionary mapping each term to an id, and a corpus capturing each document-term relationship.

 

# turn our tokenized docs into a key value dict
dictionary= corpora.Dictionary(texts)
# convert tokenized docs into a doc matrix
corpus=[dictionary.doc2bow(text) for text in texts]
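To peek at what these structures hold, a quick sketch (the exact ids and counts depend on your data):

# token -> integer id mapping maintained by the dictionary
print(list(dictionary.token2id.items())[:5])
# each document is a list of (token_id, count) pairs
print(corpus[0])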

 

The next line of code generates the Latent Dirichlet Allocation model, taking the corpus, the number of topics, and the number of training passes. Printing the model, we see an estimate of the observed words assigned to each topic, effectively (or ineffectively) predicted.

 

ldamodel=gensim.models.ldamodel.LdaModel(corpus,num_topics=2,id2word=dictionary,passes=20)
print(ldamodel.print_topics(num_topics=2,num_words=5))

[Screenshot: the two printed topics with their top five weighted words]

Let’s parse this data into something we can handle. We will also combine both topics into one array to get a nice plot and then plot the data.

 

top=ldamodel.print_topics(num_topics=2,num_words=5)
topic_num=[]
topic_str=[]
topic_freq=[]

for a in top:
  topic_num.append(a[0])
  topic_str.append(" ".join(re.findall(r'"([^"]*)"',a[1])))
  w0,w1,w2,w3,w4=map(float, re.findall(r'[+-]?[0-9.]+', a[1]))
  tup=(w0,w1,w2,w3,w4)
  topic_freq.append(tup)

words0=topic_str[0].split(" ")
words1=topic_str[1].split(" ")
words=words0+words1

worddict0=dict(zip(words0,topic_freq[0]))
worddict1=dict(zip(words1,topic_freq[1]))

sorted_list0 = [(k,v) for v,k in sorted([(v,k) for k,v in worddict0.items()])]
sorted_list1 = [(k,v) for v,k in sorted([(v,k) for k,v in worddict1.items()])]

y_pos = np.arange(5)

freqs=[a[1] for a in sorted_list0]
ws=[a[0] for a in sorted_list0]
freqs1=[a[1] for a in sorted_list1]
ws1=[a[0] for a in sorted_list1]

pp.bar(y_pos, freqs, align='center', alpha=0.5, color=['coral'])
pp.xticks(y_pos, ws)
pp.ylabel('word contributions')
pp.title('Predicted Topic 0 from IMDB Plot Keywords')
pp.show()

pp.bar(y_pos, freqs1, align='center', alpha=0.5, color=['coral'])
pp.xticks(y_pos, ws1)
pp.ylabel('word contributions')
pp.title('Predicted Topic 1 from IMDB Plot Keywords')
pp.show()

[Plots: bar charts of the top five word contributions for Predicted Topic 0 and Predicted Topic 1]

This process can then be repeated for any genre of film in the IMDB data set.

If you like these blog posts or want to comment and/or share something, do so below and follow py-guy!

First blog post ~ python packages

07-14-2017

Welcome to py-guy! The py-guy blog explores science, culture, and technology with simple examples and thoughtful discussions. For the first post I will talk about why Python is a useful programming language and some nifty things Python can do, while exploring the MoMA data set. The Museum of Modern Art collection is an excellent data set containing the title, artist, date, medium, etc. of every artwork in the Museum of Modern Art, and it is perfect for the scope of this post. To download the data set and run your own analysis, use the link below.

https://www.kaggle.com/momanyc/museum-collection

Python seamlessly enables all stages of data manipulation, and the matplotlib, numpy, and pandas packages streamline the process of intuitive data analysis. At first I felt cheated that I could just import a package to run all the calculations without knowing what is going on under the covers, but after my first few modules I can say these packages are powerful components in the py-guy toolbox.

import math, json, collections, itertools
from collections import Counter
import numpy as np
import pandas as pd
import matplotlib.pyplot as pp

arts=pd.read_csv("artworks.csv",names=['id','title','artist-id','name','date','medium','dimensions','aquisition-date','credit','catalogue','department','classification','object-number','diameter','circumference','height', 'length', 'width', 'depth', 'weight', 'duration'],dtype='str')
arts.head()

With pandas there is a sort method you can call on any data frame to sort in ascending or descending order. Pandas enhances numpy by including data labels with descriptive indices, robust handling of common data formats and missing data, and relational database operations. Note that sort_values returns a new, sorted DataFrame rather than sorting in place, so assign the result back, as below.


df = pd.DataFrame(arts)
# convert dates to numbers; unparseable values become NaN
df['date'] = pd.to_numeric(df['date'], errors='coerce')
# sort_values returns a new DataFrame, so keep the result
df = df.sort_values('date')

romanticism = df[(df['date']>=1790) & (df['date']<=1880)]
modern = df[(df['date']>=1860) & (df['date']<=1945)]
contemporary = df[(df['date']>=1946) & (df['date']<=2017)]

df1 = romanticism.sort_values('date')
df1[-5:] # check if successful

Then, using matplotlib, set up a histogram of the dates, with the bins spanning the range of the art periods.

 

# list comprehension to pull only dates of type float from df
dat = [d for d in df['date'] if np.isnan(d) == False]

# set up the plot
pp.hist(dat, bins=range(1790,2017))
pp.ylabel('Number of Artworks')
pp.xlabel('Year')
pp.title('Artworks per Year')
pp.show()

[Plot: Artworks per Year histogram]

The Python language is expressive in its readability and simplicity. In only a few lines of code you can read, manipulate, and plot data.

 

# according to wikipedia art periods are defined by the
# development of the work of an artist, groups of artists or art movement
# Romanticism -1790 - 1880
# Modern art - 1860 - 1945
# Contemporary art - 1946–present

periods = ('Romanticism','Modern','Contemporary')
y_pos = np.arange(3)
arts = [romanticism.size,modern.size,contemporary.size]

pp.bar(y_pos, arts, align='center', alpha=0.5, color=['coral','yellow','teal'])
pp.xticks(y_pos, periods)
pp.ylabel('Artworks')
pp.title('Pieces per Movement')

pp.show()

[Plot: Pieces per Movement bar chart]

Collections and list comprehensions are more powerful components Python has to offer. I will make another blog post on python collections and list comprehensions, but for now here is a quick example illustrating their utility.


# make a list comprehension of the artist names
nam = [n for n in df['name']]

# using the from collections import Counter
name_art = Counter(nam)
# the above line is equivalent to collections.Counter(nam)

# take the ten artists with the most artworks
mc = name_art.most_common(10)

artists = [a[0] for a in mc]
common_arts = [a[1] for a in mc]

Let’s try a horizontal bar chart with ‘barh.’

y_pos = np.arange(len(common_arts))
pp.figure(figsize=(10, 3))
pp.barh(y_pos, common_arts, align='center', alpha=0.5)
pp.yticks(y_pos, artists)
pp.xlabel('Number of Artworks')
pp.title('Top 10 Artists with most pieces in Moma')
pp.show()

 

[Plot: Top 10 artists with the most pieces in MoMA]

Similarly, this process can be repeated for different variables and scopes, returning some interesting results.

arts=pd.read_csv("artworks.csv",names=['id','title','artist-id','name','date','medium','dimensions','aquisition-date','credit','catalogue','department','classification','object-number','diameter','circumference','height', 'length', 'width', 'depth', 'weight', 'duration'],dtype='str')
df=pd.DataFrame(arts)

cls=[c for c in df['classification']]
cls_count=collections.Counter(cls)

clsCol=cls_count.most_common()
clsArr= [c[0] for c in clsCol]
numCls=[c[1] for c in clsCol]
y_pos = np.arange(len(clsArr))

pp.figure(figsize=(10, 20))
pp.barh(y_pos, numCls, align='center', alpha=0.5)
pp.yticks(y_pos,clsArr)
pp.ylabel('Classification')
pp.xlabel('Number of Artworks')
pp.title('Classification of Artworks')
pp.show()

[Plot: Classification of Artworks bar chart]