Geekery

Will forever be interested in a good espresso, conversation, and meeting new people.

April 17, 2014 at 8:19pm
0 notes

Context switching as easy as cd →

Of course, there is no way that context switching can be as easy as issuing one of the most basic shell commands. But perhaps it can get closer?

I made a dumb error today when I was going between directories of different projects and ended up breaking our chat bot. I thought “wouldn’t it be nice to be able to leave yourself little sticky notes on your directories?”

I quickly whipped up context-switcher and I’m going to be testing it out for the next few weeks to see whether I actually use it and whether it helps, hinders, or adds no value at all.
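If you’re curious what “sticky notes on your directories” might look like in practice, here’s a minimal sketch of the idea in Python (the .context filename and the overall shape are my illustration here, not necessarily how context-switcher actually works):

#!/usr/bin/env python
# Sketch: print the "sticky note" for a directory, if one exists.
# The .context filename is hypothetical.
import os
import sys

NOTE_FILE = ".context"

def show_note(path):
    note = os.path.join(path, NOTE_FILE)
    if os.path.isfile(note):
        with open(note) as f:
            print(f.read().strip())

if __name__ == "__main__":
    # Default to the current directory if no path is given.
    show_note(sys.argv[1] if len(sys.argv) > 1 else os.getcwd())

Wire something like that into your shell’s cd (via an alias or function) and every directory change can surface its note.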

If you give it a whirl, let me know what you think.

March 26, 2014 at 9:58pm
2 notes

Mapping FCC Data

A really quick GIS tutorial I made about mapping some FCC data in Hamilton County, Tennessee. You can follow along with the written instructions here:

http://go.jeremiak.com/map-fcc-doc

January 29, 2014 at 9:45pm
1 note

Web scraping with a single line of code

Last fall, the US federal government shut down because of our elected representatives’ inability to compromise. The general sentiment seemed to be that daily life continued on uninterrupted, and to the more cynical, the whole event illustrated the government’s lack of relevance. I made an impassioned, though perhaps quixotic, plea to consider the larger impact on things like our national scientific research, important social programs, and the sexiest of all: the national debt.

I needed a way to illustrate the impact of this political stalemate, and the measure that made the most sense was the interest rate on US debt. In other words, I was concerned with the recent change in the price that investors were willing to pay for Treasury debt, which funds the operations of the federal government. The particular measure I used was the yield on various debt offerings, focusing on the 1-month issue, called a “T-bill”. The Treasury Department publishes these yields in a nice table that looks like this:


[Screenshot: the Treasury’s daily yield curve rates table]

(Source: http://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yieldYear&year=2013)


While I appreciate the Treasury Department releasing these numbers in such an easy-to-read manner, I really wanted to be able to play with them in a spreadsheet and graph them in different ways. To get all the data off the page, I could either copy and paste it or scrape it. Of course, some civil servant had already put in the effort to get all this data into the table, and I had no desire to duplicate that effort. After all, I optimize for laziness.

Web scraping is a simple idea: pull some data off of some place on the internet. Generally, scraping means programmatically crawling a web site and parsing it. I only had a few minutes, so I decided to try out a Google Docs trick I had recently heard about: ImportHTML().

Back at the Treasury’s site, you can see there is a nice, big table element in the middle of the page with all of the desired data. I made a new spreadsheet, and in cell A1 I set the cell’s value to:

=ImportHTML("http://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yieldYear&year=2013", "table", 66)

You can see that I’m essentially invoking the ImportHTML function and passing in three parameters, in a particular order:

  1. The first parameter is the URL of the web page to scrape. In this case it’s the URL of our nice Treasury table at http://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yieldYear&year=2013
  2. The second parameter is the HTML element tag that you’d ultimately like to pull out. In this case that means we’ll want “table”, since the data is contained within a <table> element on the Treasury page. You can easily find the element type with something like Chrome Developer Tools, but that’s beyond the scope of this post.
  3. The third parameter is the most confusing by far. The best way to think of it is the following: take all of the “table” elements on the page, put them into a list ordered by where they appear in the source code, and then count down the list (starting at 0) until you get to the item you want to scrape. This can be somewhat tricky: you’ll notice that in my function I supplied the value 66, which means the table I wanted was the 67th table on the page as ordered within the source code. The page does not look like it has 60+ tables, so it took me a few minutes to get the number right, and it might take you some trial and error (see the sketch below).
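If you want to sanity-check that third parameter outside of Sheets, here’s one way to do it (a sketch using Python’s pandas library, which is my own illustration and not part of the Sheets trick). pandas.read_html returns every <table> on the page as a list in source order, so the list index lines up with the 0-based counting described above:

# Sketch: figure out which table index holds the yield data.
# Assumes pandas (and its lxml dependency) is installed.
import pandas as pd

url = ("http://www.treasury.gov/resource-center/data-chart-center/"
       "interest-rates/Pages/TextView.aspx?data=yieldYear&year=2013")

tables = pd.read_html(url)   # one DataFrame per <table>, in source order
print(len(tables))           # how many tables the page actually contains
print(tables[66].head())     # index 66 matches the 66 passed to ImportHTML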

Once you enter the formula, you’ll find that the tabular data is pulled into your Google Spreadsheet and laid out correctly in columns and rows. All I did next was make a second sheet that was simply a line plot of all the data pulled from the Treasury’s web site. The chart looked like this:

[Screenshot: line chart of the 2013 Treasury yield data]

(Feel free to check out both the spreadsheet and chart, right here: https://docs.google.com/spreadsheet/ccc?key=0AteUjArWq80LdDgzM1NEbzlHSHY1U3hSczdPV3VMMFE&usp=drive_web#gid=2)

The last piece I needed was annotations, so that I could demarcate where the shutdown was along with some relevant preceding events. Fortunately, this is also surprisingly easy in Google Docs: you just need a column at the end of the data set with no header value. In the chart above, the letter “H” is where the shutdown began and “J” is where it ended. That massive spike in the blue line, which had previously hugged the bottom of the chart, is the increased immediate cost of borrowing during the shutdown.
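To make that concrete, the layout looks something like this (the numbers here are illustrative, not the real data; the headerless third column holds the annotation text, which the chart letters automatically):

Date        1 Mo
09/30/13    0.02
10/01/13    0.12    Shutdown begins
10/17/13    0.03    Shutdown ends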


Turns out that the cost of short-term Treasury debt is not a really compelling story for my friends, and many continued to opine that the government shutdown was indistinguishable from life before it. But at least I have a pretty chart now, and it only took three minutes and one line of code to create.

August 28, 2013 at 2:37pm
0 notes

jeremiak/OAuth v1 Request Signing →

I really enjoy OAuth 2.0. I think it’s clean, simple, and easy to use in a multitude of use cases.

However, I was trying to better understand how OAuth 1 signs requests. I wanted to make sure I understood exactly how the signature is generated, so I wrote a quick script that does the same thing and makes authed requests against Twitter.
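The script itself is linked above, but the core of OAuth 1 signing is short enough to sketch from scratch (a Python illustration of the HMAC-SHA1 flow described in the spec, not the code from my script):

import base64
import hashlib
import hmac
import time
import urllib.parse
import uuid

def percent_encode(s):
    # OAuth 1 requires strict RFC 3986 encoding (only unreserved characters are spared)
    return urllib.parse.quote(str(s), safe="~")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    # 1. Percent-encode every key and value, then sort the pairs
    pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
    param_string = "&".join(k + "=" + v for k, v in pairs)
    # 2. Signature base string: METHOD&encoded-url&encoded-parameter-string
    base_string = "&".join([method.upper(), percent_encode(url), percent_encode(param_string)])
    # 3. Signing key: consumer secret and token secret joined by "&"
    #    (the trailing "&" stays even when there is no token secret)
    signing_key = percent_encode(consumer_secret) + "&" + percent_encode(token_secret)
    # 4. HMAC-SHA1 the base string, then base64-encode the digest
    digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# The oauth_* protocol parameters get signed along with any query/body params
# (credentials below are placeholders)
params = {
    "oauth_consumer_key": "your-consumer-key",
    "oauth_nonce": uuid.uuid4().hex,
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_token": "your-access-token",
    "oauth_version": "1.0",
    "status": "hello world",
}
params["oauth_signature"] = sign_request(
    "POST", "https://api.twitter.com/1.1/statuses/update.json",
    params, "your-consumer-secret", "your-token-secret")

The resulting oauth_signature then goes into the Authorization header along with the other oauth_* parameters.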

February 17, 2013 at 6:49am
0 notes

Only way to change is to… change

I found myself visiting The Huffington Post many times a day, despite what I consider to be almost sheer sensationalist journalism.

And I just found myself doing it again, one too many times. I’ve been thinking a bit recently about how to design behavior, particularly how to curb bad or unwanted behavior, and I’ve come to the conclusion that the change must be built into the constraints we apply to ourselves or our users.

Thus, the following now resides in my /etc/hosts file:

127.0.0.1 huffingtonpost.com

127.0.0.1 www.huffingtonpost.com

February 7, 2013 at 1:22am
0 notes

Just how valid is that TOS anyway?

I’m finally getting around to churning through some of my saved Pocket posts and came across a TC post from late December titled “Instagram Hit With Class Action Lawsuit Related To Last Week’s Change Of Service Terms” and it got me thinking.

Of course, everybody has thought to themselves “who even reads these things?” as we check “I agree” on any of the multitude of services we use in a given day.

AND, if nobody reads them, does that mean we’re signing our collective digital personas away to be monetized through personal ads or affiliate revenue?

Likely. Though I’m not so sure that such behavior by mainstream digital services will be looked upon so favorably by the American judicial system. Actually, I have no real way of knowing that either, but this class action lawsuit against Instagram on behalf of its users will definitely set the industry tone going forward on how binding a TOS is, as well as how one must articulate and maintain those terms.

February 5, 2013 at 12:43am
0 notes

A veritable natural law in social media is that to get to a system that is large and good, it is far better to start with a system that is small and good and work on making it bigger than to start with a system that is large and mediocre and work on making it better.

— @cshirky in Cognitive Surplus

September 20, 2012 at 3:25pm
0 notes

A great visualization of how Stuxnet worked, and arguably how the next generation of weapons will be modeled.

It also exposes a serious vulnerability in the way we approach foreign policy and domestic security. Of course, run-of-the-mill attacks such as DDoS continue to appear with ever greater frequency, such as yesterday’s news of attacks on some US banks, but what happens when the core infrastructure becomes a target?

(Source: vimeo.com)

September 18, 2012 at 5:38pm
2 notes

Following up on my post about cyborgs from last year, a great piece on the subculture of those who wish to mix human and machine at an ever-increasing pace.

(Source: Yahoo!)

March 21, 2012 at 12:59am
0 notes

Way too cool; da Vinci would be proud

November 10, 2011 at 1:48am
0 notes

Apple as a Single-sign-on service

It’s curious to me that Apple is not yet a single sign-on provider, and it makes me question whether the walled garden of Apple’s ecosystem will ever extend to third-party sites.

I really started to consider this question after I got an iPad and realized how much data is accessible through the Apple ID, particularly with iCloud and iTunes Match. It is likely, especially with email, contact, and calendar data in iCloud, that a third-party application might get great value out of offering Apple as a single sign-on partner. It is also likely that Apple could collect a great deal of information that would make many of their consumer-facing data plays more effective by renting out this new device graph.

When you consider it, the layer of information stored with Apple isn’t really a social network (besides failed attempts like Ping), but more of a representation of a person’s entire life. You have apps (great representations of interests; there are very few reasons a person would download a MotoX app), tunes and digital media (great representations of preferences), and calendar and email data (great representations of behavior). I can only imagine the potency when combined with social sources like Twitter and Facebook.

Then again, there is real reason to doubt whether Apple will be interested in collecting more data through third-party developers. After all, they’ve maintained an ever-higher wall around their garden since the end of the clone wars.

It will be interesting to see what happens. As the other Silicon Valley giants race to liberate (or appear to liberate) the massive amounts of user data, Apple predictably moves in the other direction.

October 18, 2011 at 2:24am
38 notes

Developer Middle Class Squeezing

Hark back to the year 2008; do you remember what was different then? Sure, the world economy was collapsing and we were in a heated political battle between McCain and Obama; things were uncertain, including something we take for granted today: the penetration of mobile.

I’m sure we’ve all seen Mary Meeker’s chart of the growth of mobile versus other internet-based technologies, so no need to rehash here. However, I remember starting on an iPhone app in 2008, and I remember my dad asking me if I thought that the iPhone would take off. Today, that seems like the most ridiculous question. But in 2008, we didn’t have any handsets that weren’t under the direct control of the carriers.

Fast forward three short years, and apps have become a sweet spot in this dismal economy. Mobile ad spend is approaching multiple billions of dollars per year; it seems like every day a new mobile startup is launching, growing, and being bought by some major player. Meanwhile, a whole class of mobile developers has ridden the wave up.

Now, I’m not talking about the Zynga, Rovio, or Instagram crews. I’m not talking about the terrible apps that are never downloaded more than once (probably on the developer’s mom’s phone). I’m referring to the middle class of mobile developers: the developers with decent skills and decent ideas, but who cannot cut through all the noise to make enough revenue off their apps to work on them full-time.

It’s this group of developers that will likely be squeezed the most. It’s companies that serve this cohort of developers with easy-to-use tools, and provide end results with dollars attached, that will thrive.


September 29, 2011 at 6:36pm
40 notes

The Suits and their suits

This mobile patent war is getting ridiculous. It’s more dramatic than All My Children. I guess it took them 40-something years to get rid of that nuisance as well. Damn, 40 years of this crazy IP-WMD business, and we’ll all be fucked.

Major props to those only on the receiving end, particularly Google. They didn’t have many patents, because they didn’t want to have to play this game. So much for that, I suppose…

Mobile Patent web

September 6, 2011 at 4:28pm
5 notes

Went to go see the two Andy Goldsworthy pieces in the Presidio this weekend. The “Wood Line” was pretty awesome, and very Goldsworthy-esque.

July 29, 2011 at 10:00am
16 notes

Valuing Games with Friends

Since I’m on a Facebook tear right now, along with the rest of the world I suppose, it would be prudent to lay down some of my thoughts regarding the newest it-thing in the Facebook ecosystem: Zynga.

The game company’s recently filed S-1 reveals how important the social graph (and the largest creator of it) is to Zynga. The S-1 filing mentioned Facebook over 200 times (source). This may not be surprising to those who know Zynga as the creator of FarmVille and Mafia Wars (two very popular Facebook games). But Zynga has also branched out beyond the Facebook platform, producing the ever-popular Words with Friends mobile application. Yet the bulk of the company’s future success remains tied to Facebook.

But it is not an unrequited relationship: it was recently noted that almost 10% of Facebook’s revenue came from fees paid by Zynga. This also seems to be borne out by speculation in the financial markets, which values Facebook at $100 billion and Zynga at approximately $10 billion. Having 10% of revenue come from a single company seems somewhat risky.

With regard to Zynga, I’m still having trouble accepting that they got millions of people to spend real dollars on digital cows and corn. But it is clear that they’re really onto something, and it will be exciting to watch its trek through the IPO and onto the markets.