21 June 2011

Software Stabilization for Video!


Via Google Research:

Casually shot videos captured by handheld or mobile cameras suffer from significant amount of shake... Our goal was to devise a completely automatic method for converting casual shaky footage into more pleasant and professional looking videos.

Eschewing the promo video, I decided to try and run the technique on a video I'd shot from a moving golf cart, that was moving, while climbing a hill, after I'd had a beverage or few. I've added the software-stabilized and the original videos below. Really quite incredible:

Video Stabilized:


Original:

14 June 2011

Tim Bray on the Android Ecosystem


Went to a presentation by Tim Bray. Here are my notes:
Indication of the state of the Economy:
Who's looking to hire people? Who's looking for a job? Stand up and talk to each other afterward.
Explains his role: I'm an advocate, not an evangelist. Tell me about your experiences so I can take them back to the product group.
  • More than 4BB mobile phones in the world today.
  • Only 1BB PCs
  • Only 5 years to get to 225MM users for iOS
Presents a bunch of choices for presentation topics and asks the group to hum based on their desire to hear them. That was really funny. Someone really wanted to hear about native versus web-based applications.
Vast majority of developers don't seem to be making any money. In general, here are the various ways to make money:
  • App sales
  • App upgrades (Oracle is great at this)
  • In-app advertising (this is a substantial driver of revenue for a lot of developers, banging the Google drum)
  • In-app sales (this is a big deal)
  • Subscriptions (TripIt, for example, leverages a server-side platform; 37signals as well)
Who are you selling to?
The mobile industry wants you to think you're selling to a young urban hipster. Too many mobile apps are aimed at solving first-world problems. This is not just not-smart, but immoral. Note that mobile phone adoption in the third world is exploding. Don't scope your demographic too narrowly to just the US.
Who's buying and installing apps: US is the biggest, next up Japan, next up Korea, next Germany and Britain.
Multiple APK support: Can now provide multiple APKs that target different segments. Question from audience: Can you ship an app per carrier? Not sure.
Fixing the insane app count:
Two hundred thousand apps and counting. Featured apps get a 25x to 50x spike in downloads. Adding in badges: Editor's Choice, Top Developer, Top Grossing etc. Should help distinguish apps.
Question from audience: blacklisting? Not a bad idea.
Uninstalls for apps are very high value signals when it comes to rating an app.
Frustrated question from audience: Why so many ways to rate things? No answer.
Direct Carrier Billing is 50% of revenue: "put it on my phone bill". Transparent to the developer. Didn't take with T-Mobile, but now going nuts in Asia. Every carrier wants this, but it won't happen quickly (two problems):
  1. Carriers have billing from the 1950s, so it's a fierce engineering challenge
  2. When you do carrier billing, they show up with 3 engineers and 11 lawyers
Some complaints from audience about FUD around DCB and latency to app deployment:
Google's core competence never really included communication. But we're talking about things.
What's coming in 3.0 and 3.1 (Ice Cream Sandwich.. no 'J' yet)
  • new 'Holo' theme
  • the Palm guy is the UI Czar
  • Fragments: widgets that have a lifecycle within an activity. Helps during rotation.
  • Really slick notification interface (ribbing at Apple)
  • Menu bar is always on, but you can put it in "lights out mode", which blacks it out
  • New Action Bar on the top: like a menu bar on a PC app, and it's contextual based on your app
  • Renderscript: C-like syntax that will exec on the GPU, and runs on LLVM under the covers
  • Much better animation
  • HTTP Live Streaming (data rate sensitive with backoff)
Web vs Native: Shows the TripIt app (which implements native, mobile website and full website) through a set of phases. Not obvious. Choices for native versus web typically hinge on "I know Java" (for native) vs "I know HTML and JavaScript" (for web).

04 June 2011

Visualizing ten months of work in under two minutes


Since wordsinmedia.com was written using Subversion as its version control system, I was able to run the wonderful gource visualizer on the version control logs.
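The recipe is roughly this (recent versions of gource can read Subversion's XML logs directly; the exact flags may vary with your version):

$ svn log --verbose --xml > wordsinmedia-log.xml
$ gource wordsinmedia-log.xml

Here's the end result: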


A view from the Oyster Dome

Midway to the top of the Oyster Dome last weekend with the Ogdens. Four images stitched with Hugin. Click image to expand to a much higher resolution PNG.



24 April 2011

Masters in CS: worth it after professional experience?

I'll be defending my Master's thesis on May 6th, and if all goes well I'll be graduating with full pomp and circumstance a few weeks thereafter. Hoping that these notes on going back to school to get a Master's in CS after working for many years may prove useful to someone else, I figured they should be captured before I (hopefully, fingers crossed) graduate.

Quick Context
I'd been working for an organization that had treated me well for about six years. Things were good- I'd started out as a developer, almost fresh out of college. Over the years I got various other responsibilities. Eventually, I landed up leading a team that built and maintained a lot of software integral to the organization. This was 7 years after finishing my bachelor's degree.

Making weather stations
Learning on my own
I used to busy myself with various projects outside of work. I'd build weather stations (which got hit by lightning sporadically), write my own Android apps, try and build strange home automation tools, and sometimes try to instrument my pets (never really worked).

Learning on the job
Put delicately, work had stopped challenging me as much as I would have liked it to. Problems existed that needed to be solved, but they seemed to follow similar patterns. The ones that scared me had been solved, or weren't solutions the organization needed to invest in anymore.

Between work and my own tinkering, I just didn't have the commitment to dig in and try and learn a whole bunch of things in a thorough way. I'd read up a lot, but there wasn't any accountability to really understand things in a way that was truly beneficial. Most of my reading/tinkering made me feel like a spectator to what was going on, when I really wanted to be a part of it.

Finding an institution
Seeing that most formal education offers a grounding in a wide array of topics, I felt that a Master's would be a good idea. I started to scour websites of various colleges and universities for information and details on admissions. My undergrad performance left a lot to be desired; my second year's grades read like output from a loop over Random.nextDouble(). But I hoped that my professional experience might offset that disaster. I started to contact various admissions departments for more information on programs, and for queries on admissions requirements. This proved to be really frustrating.

Hand it over.

Tangential vent: who's the customer?
Someone who wants to attend a higher learning establishment is, reasonably speaking, in economic terms a client. They pay the institution for a service: education. Yet most non-academic (i.e. admissions, payments, records etc.) departments at every university/college I dealt with treated me horribly- no returned calls, no real information, and never any emails with substance (if replied to at all). This was very disheartening. You're supposed to make a commitment of many tens of thousands of dollars* over two years to people you can't get straightforward and thorough details from about what their program has to offer?!


The Academic Departments know what's up
The best way to learn more about a program, to understand how it functions, and to get details on admissions etc. is to contact the Chairperson for the CS department (CC their assistant if they have one too). In general, they're very pragmatic- and will make allowances if you weren't a rockstar during undergrad but have solid working experience. Also, they're going to be the ones to eventually decide whether they want you in their program, so building a relationship with them up front can't hurt.


Distance learning?
Since my local options had run dry, or were just painful to deal with, I decided to expand my search to universities that provided distance learning. Not surprisingly, any college that provides distance learning also does a pretty good job of communicating with potential applicants via email**. I landed up getting admitted into Hofstra after learning more about their CS program.

Not easy, but rewarding
About 90% of my courses were really good. In general, the amount gained from a course was proportional to the amount invested by the professor. Surprisingly, I think that all parties (educators and students) have to put in a lot more effort in a distance learning setting than they would in a traditional classroom setting. Consider that your average course taught over distance learning requires that the professor create a video, slides, notes and provide references to various supplementary materials as part of a single lecture. Instead of office hours, you have discussion boards where everyone participates and sometimes the professor throws down a specific discussion point.

The net result is that you have a much higher bandwidth with your professor- you're in email contact with them often. Frequently, professors will provide their IM handle so you can IM them anytime too. You get to watch the lectures on your time: evenings, weekends or whatever works.

Homework on the road
I landed up spending weekend after weekend on school for about two years. Most evenings in the week had me hunched over my laptop listening to lectures or answering questions. It isn't easy, but distance learning makes it possible if you have a job that's demanding. You have to develop the discipline to context switch completely though. I kept my "school" laptop on me at all times so that I could power up and watch a lecture or answer a homework assignment if I had any free time***.

Was it worth it?
Yes. It's been a really rewarding process. I've had a bunch of classes, of which my favorites were:
  • Algorithm design and analysis: going all the way down to fundamentals and approaching each data structure from an implementation standpoint and analyzing them for every possible operation. Order of magnitude is my best friend.
  • Programming Language Concepts: All I'd had any real experience with were imperative languages, so this was a real mind expander. It exposed me to the fundamentals of functional languages from a theoretical standpoint and then I got to play with them too. (I had to write code in ML and see the most sensible compiler output ever). This class alone made the whole gig worth it. 
  • Security: which makes a lot more sense after you've had to deal with it in a corporate setting. I got to understand asymmetric key generation, encryption, and decryption by hand (and wolframalpha.com). Not to mention getting familiar with Kerberos and all sorts of other network based authentication systems that make you chuckle when you get a sales pitch about SSO from a vendor. 
  • Operating Systems: Based on Andrew Tanenbaum's Modern Operating Systems. Deep dives into processes and threads, memory (everything you ever wanted to know about managing memory, algorithms to do so, and tradeoffs), and file system design.
  • Databases: Really understanding how they're built. This proved immensely valuable at work- I took all the theory and was able to apply it back in practice. 
  • Advanced Data Structures: Implement every data structure possible with a different one. Get familiar enough with Huffman to create tables and then compress boring documents during a meeting for fun, on paper. 
Most of these taught me things I was able to put to use immediately at work, or in my own tinkering. In many cases, I felt like the framework provided gave me a much better footing for theoretical and design discussions.

Timing
I'm glad I waited to do my Master's until I had a bunch of work experience behind me. Encountering problems in real life (and sometimes solving them in a Rube Goldberg fashion) allowed me to gain a lot more from school than I would have had I not encountered the problems beforehand. I think you get a lot more 'ah-ha!' moments..

Consequences
My primary rationale for going to school was that I felt I needed to be challenged with problems I hadn't encountered before. This I got in spades.. And then there was the character building from the volumes of homework, assignments, papers and such. But it also became glaringly apparent that this would be a finite engagement. Eventually you graduate (you hope!). I wanted to continue this education/masochism. To some degree then, this whole experience helped me realize that I needed to find a different job: one where I would have to face a whole bunch of problems that would make me nervous, and would have to learn a lot on the fly. And so, I started job hunting. But that's a different story..

And now, enough procrastinating and back to wrapping up my thesis so I can try and graduate...


Footnotes:
* Be prepared to invest between $20K and $40K for a good distance learning program. Plan on getting a laptop that you'll use for the entire program too.
** Drexel and Hofstra were two that did particularly well.
*** You have to put in a lot of energy. Plan on getting hit for no less than 15 to 20 hours a week. Recovering once you fall behind is really hard, since the homework and discussion board posts all start to add up and create a huge backlog.

03 April 2011

Joint != Separate Combined

Every year, I walk through an online interview to fill out our taxes and this question never fails to amaze me.

27 March 2011

More juice

Recently, I discovered a problem with wordsinmedia.com.
Week 11 was not fun.
There are three main parts to this system:
  • a database that stores stuff
  • a set of perl programs that acquire and process the news and store them in the database
  • and, a website that sits on top of the database whose backend executes within Jetty
A recent change I made that increased the number of news sources being polled and analyzed caused a significant spike in resource utilization.

The additional processing on the perl side is CPU intensive, and with more news to process, more CPU was being burned. With more data from the perl side, the MySQL instance had changed its growth rate: queries that dealt with hundreds of rows earlier were now dealing with tens of thousands, causing an increased load on the database. Collectively, everything had added up nicely to swamp the whole system, leaving any queries from the website dog slow- rendering the website very unresponsive. And yes, it all lives together- this was nothing more than an experiment that grew incrementally, so...

All of my hardware is virtually provisioned, and lives within a cloud. I'm biased toward a specific one, but anyway...

As a first step, I figured I should isolate the various parts to see if that helps things along- there was just too much CPU contention to adequately isolate components and make a deterministic call on what was going on. I figured I'd separate the perl processing from the database/web server first. Fairly simple to do:
Need more power?

Provision a new node
Extraordinarily easy, and in many cases, free if you want a small amount of horsepower. Get an OS booted up on it and call it good.

Addressing
Since there's going to be node-to-node addressing for the perl programs to talk to the database node, you need a way to maintain address lookups. In my case, I rely on Elastic IPs, which, while publicly visible, also resolve to internal IPs when used within a security group.

Fortunately, I only needed to make one change: point the perl programs to the elastic IP instead of pointing to localhost.
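In DBI terms, that amounted to a one-line change in the connection string (a sketch- the database name, credentials, and IP here are made up):

use strict;
use warnings;
use DBI;

my ($user, $pass) = ('news_rw', 'secret');   # made-up credentials

# before: everything lived on one box
# my $dbh = DBI->connect('DBI:mysql:database=news;host=localhost', $user, $pass);

# after: the database sits on its own node, reachable via the Elastic IP
my $dbh = DBI->connect('DBI:mysql:database=news;host=203.0.113.10', $user, $pass);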

That's it. Asynchronous news acquisition and analysis is on one node, while the database and web server are elsewhere. As is evident, separating those two would be trivial too- just get another node, place the war in a web server there, futz with addressing and call it good. If it doesn't work, scrap it- you lost nothing other than the time it took to run your experiment.

There's no rocket science in any of this. But it's heartwarming that in reality it really only takes a couple of hours (for the uninitiated like me) to get this done. Contrast that with trying to do this if you had to work with your own hardware- you'd either have to buy some, or hope there's some lying around, or make a case with your hardware team. Then you'd have to hope that this pans out well- since if it doesn't, you just sank your investment in hardware.

This is, admittedly, an almost contrived example of why on-demand virtual provisioning is awesome. But I think I got lucky in that my components were so inherently separable. My initial tendency might have been to do something horrible, like have the news acquisition/processing live within the scope of the same war that powered the web-end. One deployment/logs/build to worry about, right?

I've been part of many decisions where I suggested or was persuaded to accept that it was ok to stuff yet another component into an already large ball of yarn. Invariably, all of these would get knit together and thus become one inseparable bundle of pain.

With virtualization being so easy and cheap, I wonder how much easier it might be to consider spinning up fresh instances for every new component you consider? Granted- it's a pendulum swing, and might not always be appropriate. But if you used that premise as a baseline assumption- how would that change the end quality of what you build, how it can scale, and how easy it is to maintain?

14 December 2010

chronicling something strange

I've been trying to get to Google Maps but keep getting flipped to Yahoo Maps. Trying to capture what happened in case others have experienced this in the past or are experiencing this now.

First off, this looks odd:

$ traceroute maps.google.com
traceroute to maps.l.google.com (98.136.42.132), 64 hops max, 52 byte packets
1 192.168.1.1 (192.168.1.1) 10.414 ms 0.979 ms 1.009 ms
2 73.220.38.1 (73.220.38.1) 8.990 ms 8.228 ms 7.975 ms
3 ge-4-13-ur01.seattle.wa.seattle.comcast.net (68.87.207.65) 8.295 ms 7.303 ms 8.122 ms
4 be-70-ar01.burien.wa.seattle.comcast.net (68.85.240.101) 9.642 ms 11.061 ms 8.976 ms
5 be-40-ar01.seattle.wa.seattle.comcast.net (68.85.240.94) 10.064 ms 10.512 ms 9.978 ms
6 pos-0-0-0-0-cr01.portland.or.ibone.comcast.net (68.86.93.105) 14.582 ms
68.86.95.185 (68.86.95.185) 18.711 ms
pos-0-1-0-0-cr01.portland.or.ibone.comcast.net (68.86.93.109) 14.648 ms
7 pos-1-7-0-0-cr01.seattle.wa.ibone.comcast.net (68.86.85.109) 14.971 ms 13.591 ms 13.087 ms
8 te-3-2.car1.seattle1.level3.net (4.79.104.105) 14.353 ms 14.662 ms 14.563 ms
9 ae-31-51.ebr1.seattle1.level3.net (4.68.105.30) 25.328 ms 20.115 ms 17.439 ms
10 ae-7-7.ebr3.sanjose1.level3.net (4.69.132.49) 37.199 ms 38.963 ms 35.658 ms
11 ae-73-73.csw2.sanjose1.level3.net (4.69.134.230) 35.853 ms 41.441 ms 35.956 ms
12 ae-33-89.car3.sanjose1.level3.net (4.68.18.133) 32.992 ms 35.116 ms 34.360 ms
13 yahoo-inc.car3.sanjose1.level3.net (4.71.112.14) 33.677 ms 35.309 ms 35.993 ms
14 ae-0-d161.msr1.sp1.yahoo.com (216.115.107.59) 33.408 ms
ae-0-d171.msr2.sp1.yahoo.com (216.115.107.83) 80.014 ms
ae-1-d161.msr1.sp1.yahoo.com (216.115.107.63) 34.297 ms
15 et-17-1.fab3-1-gdc.sp2.yahoo.com (67.195.128.73) 37.080 ms
et-17-1.fab4-1-gdc.sp2.yahoo.com (67.195.128.77) 35.420 ms
et-17-25.fab3-1-gdc.sp2.yahoo.com (98.136.16.27) 35.792 ms
16 te-8-1.bas-c1.sp1.yahoo.com (67.195.130.112) 36.362 ms
te-9-1.bas-c1.sp1.yahoo.com (67.195.130.116) 34.918 ms
te-8-1.bas-c1.sp1.yahoo.com (67.195.130.112) 34.479 ms


Here's what an HTTP request looks like:

$ telnet maps.google.com 80
Trying 98.136.42.132...
Connected to maps.l.google.com.
Escape character is '^]'.
GET / HTTP/1.1
Host: maps.google.com

HTTP/1.1 200 OK
Date: Wed, 15 Dec 2010 05:10:15 GMT
P3P: policyref="http://info.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE LOC GOV"
Expires: Wed, 16 Mar 1966 12:00:00 GMT
Cache-Control: must-revalidate
Pragma: no-cache
Set-Cookie: _ygms=deleted; expires=Tue, 15-Dec-2009 05:10:14 GMT; path=/; domain=.maps.yahoo.com
Vary: Accept-Encoding
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8

10dd
(html escaping by me) HTML/HEAD/TITLE: Yahoo! Maps, Driving Directions, and Traffic.. and the rest of the HTML...


Will keep digging.. Wonder if the DNS address is right from where I am, or if my DNS server has been rogered.

$ nslookup
> maps.google.com
Server: 192.168.1.1
Address: 192.168.1.1#53

Non-authoritative answer:
maps.google.com canonical name = maps.l.google.com.
Name: maps.l.google.com
Address: 74.125.127.104
Name: maps.l.google.com
Address: 74.125.127.106
Name: maps.l.google.com
Address: 74.125.127.103
Name: maps.l.google.com
Address: 74.125.127.99
Name: maps.l.google.com
Address: 74.125.127.105
Name: maps.l.google.com
Address: 74.125.127.147

[Update- back to normal]

nslookup says:
> maps.google.com
Server: 192.168.1.1
Address: 192.168.1.1#53

Non-authoritative answer:

maps.google.com canonical name = maps.l.google.com.
Name: maps.l.google.com
Address: 74.125.127.106
Name: maps.l.google.com
Address: 74.125.127.104
Name: maps.l.google.com
Address: 74.125.127.103
Name: maps.l.google.com
Address: 74.125.127.147
Name: maps.l.google.com
Address: 74.125.127.105
Name: maps.l.google.com
Address: 74.125.127.99

which looks like the original nslookup.

And now this works fine too:

$ telnet maps.google.com 80
Trying 74.125.127.99...
Connected to maps.l.google.com.
Escape character is '^]'.
GET / HTTP/1.1
Host: maps.google.com

HTTP/1.1 200 OK
Date: Wed, 15 Dec 2010 06:50:42 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=0b3839d36a39775f:TM=1292395842:LM=1292395842:S=ogeImsmEEtA3UI9c; expires=Fri, 14-Dec-2012 06:50:42 GMT; path=/; domain=.google.com
X-Content-Type-Options: nosniff
Server: mfe
X-XSS-Protection: 1; mode=block
Transfer-Encoding: chunked

1000


I wonder if I ran my nslookup query too late on the first sequence to catch the glitch.
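If it happens again, comparing my router's answer against an independent resolver should isolate where the bad records are coming from, e.g.:

$ dig maps.google.com +short
$ dig @8.8.8.8 maps.google.com +short

If the two disagree, the router (or whatever it forwards to) is the one handing out Yahoo's addresses.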

25 November 2010

onward

My job at Gallup moved me to Omaha in 2003. Perhaps most importantly, I managed to find my wife here. My job treated me well. We found a great place to live: out away from everything, on a lake, with no neighbors, and 20 acres of wooded hills.





Earlier this month, I resigned from my job and accepted one in Seattle. So, now we begin our move..



09 August 2010

I think I got it right..

In my post "How exactly did Comcast win?", I concluded with:

It seems clear, from both the court ruling and the political stance, that the FCC is operating from a position that needs to be revisited. Ironically, even Comcast seems to think so.


From Google and Verizon's recent joint policy proposal on an open internet, there's a clear push to get the FCC to have the authority to do exactly what they could not do with Comcast:

..because of the confusion about the FCC’s authority following the Comcast court decision, our proposal spells out the FCC’s role and authority in the broadband space. In addition to creating enforceable consumer protection and nondiscrimination standards that go beyond the FCC’s preexisting consumer safeguards, the proposal also provides for a new enforcement mechanism for the FCC to use. Specifically, the FCC would enforce these openness policies on a case-by-case basis, using a complaint-driven process. The FCC could move swiftly to stop a practice that violates these safeguards, and it could impose a penalty of up to $2 million on bad actors.

05 August 2010

An (old) term paper regarding Tor

If you value free speech, you should read about Tor.

After becoming enamored with it, I wrote an acceptable term paper on the project.

15 July 2010

Mailbox 2.0


Our new (custom built) mailbox, replete with paint. And yes- it's a leaping dog, all courtesy of Bruce.


13 June 2010

Panoramic Experiment


Six different pictures shot with a Canon XSi, and an EF 50mm prime lens. White balance set to 6000K (cloudy). Net result, stitched with Hugin.

Organic things (like grass, and flowers, and such) that move here and there, or have tiny edges that recast shadows in variant ways, tend to cause a little bit of trouble, but Hugin does a great job of overcoming them.




16 May 2010

How exactly did Comcast win?


On April 6th, 2010, a US Court of Appeals found that Comcast could not be held to a ruling made by the FCC. For some following the whole process, this seemed pretty odd.

As a quick recap, here's how this began:
Initial reports of users having trouble with BitTorrent connections began to circulate on discussion forums around May 2007. Those affected appeared to be Comcast subscribers, and observers began speculating about the causes. A Comcast subscriber named Robb Topolski ran a tool called a packet sniffer while attempting to "seed" (i.e., offer to others for download) files on BitTorrent and discovered unexpected TCP RST packets that were causing inbound connections to his computer to die.

The EFF proved that Comcast was injecting packets into connections that were crucial to BitTorrent traffic. If you've written code that works over sockets, consider how much overhead you'd need to validate that the transport layer wasn't screwing you.
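To make that concrete, here's a minimal Perl read loop (hypothetical host and port). A forged RST from a middlebox surfaces exactly like a legitimate connection reset; the application has no way to tell the difference without re-implementing its own integrity checks above TCP:

use strict;
use warnings;
use IO::Socket::INET;

my $sock = IO::Socket::INET->new(
    PeerAddr => 'peer.example.com',   # made-up peer
    PeerPort => 6881,
    Proto    => 'tcp',
) or die "connect failed: $!";

my ($buf, $n);
while ($n = sysread($sock, $buf, 4096)) {
    # ... consume $buf ...
}
# sysread returns 0 on an orderly close, undef on error; an injected
# RST shows up as ECONNRESET, indistinguishable from a real reset
defined $n or die "connection reset: $!";

The FCC got involved next: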

The FCC determined that Comcast had violated the agency's Internet Policy Statement when it blocked certain applications on its network and that the practice at issue in this case was not "reasonable network management."

What happened next was quite interesting. Here's how Comcast came back (from the same Open CRS document above):

Comcast argues that the FCC does not have the authority to enforce its Network Management Principles and the Commission's order was invalid for that reason.

We now had the following scenario:
  1. The FCC called out a major ISP and said that they were in violation of reasonable network management practices
  2. Comcast responded by saying that the FCC had no jurisdiction to make such a claim and appealed the ruling from the FCC

Here's what Comcast's PR machine signed off with as they went to court with the FCC:

It’s truly sad that the debate around “net neutrality,” or the need to regulate to “preserve an open Internet,” has been filled with so much rhetoric, vituperation, and confusion. That’s gone on long enough. It is time to move on, and for the FCC to decide, in a clear and reasoned way, whether and what rules are needed to “preserve an open Internet,” and to whom they should apply and how. In launching the rulemaking, the FCC said that greater clarity is required, and we agree. Comcast will join many other interested parties in making comments to the FCC this week regarding its proposed open Internet rules. Our goal is to move past the rhetoric and to provide thoughtful, constructive, and fact-based guidance as the FCC looks for a way forward that will be lawful and that will effectively balance all the important interests at stake.
Comcast, the FCC, and "Open Internet" Rules: Where We Stand

Nonetheless, this went to court, and a US Court of Appeals found that Comcast was indeed legitimate in its challenge:

..the Commission relies on section 4(i) of the Communications Act of 1934, which authorizes the Commission to “perform any and all acts, make such rules and regulations, and issue such orders, not inconsistent with this chapter, as may be necessary in the execution of its functions.” 47 U.S.C. § 154(i). The Commission may exercise this “ancillary” authority only if it demonstrates that its action—here barring Comcast from interfering with its customers’ use of peer-to-peer networking applications—is “reasonably ancillary to the . . . effective performance of its statutorily mandated responsibilities.” Am. Library Ass’n v. FCC, 406 F.3d 689, 692 (D.C. Cir. 2005). The Commission has failed to make that showing.


It seems like a US Court just ruled that the FCC doesn't have the authority to tell an ISP not to tamper with subscriber traffic. I hope you're as stunned as I am.

On May 6th, Senator Jay Rockefeller and Representative Henry Waxman wrote to the FCC Chairman. Here's an excerpt from their letter [PDF]:

We believe that it is essential for the Commission to have oversight over these aspects of broadband policy, because they are vitally important to consumers and our growing digital economy. For this reason, in the near term, we want the agency to use all of its existing authority to protect consumers...

To accomplish these objectives, the Commission should consider all viable options. This includes a change in classification, provided that doing so entails a light regulatory touch, with appropriate use of forbearance authority.

In the long term, if there is a need to rewrite the law to provide consumers, the Commission, and industry with a new framework for telecommunications policy, we are committed as Committee Chairmen to doing so.

It seems clear, from both the court ruling and the political stance, that the FCC is operating from a position that needs to be revisited. Ironically, even Comcast seems to think so.

This seems like a great opportunity for American citizens who care about their network (which is part of a bigger network) to write to their political representatives. Consider for a minute how similar these two are:
  • an ISP with a monopoly over a subscriber base that chooses to "manage" traffic by sending malicious packets down sockets to disrupt it
  • a country that chooses to regulate where your sockets can connect to
A democracy is only as strong (and smart) as the people that participate in it. Now is the time to learn and take a stance on what you think broadband rights, net neutrality and the regulation of the internet should look like. Get educated, start talking about it, and write to your representatives. New laws are going to be made and you've got a ringside seat in being able to shape them.

22 October 2009

Panoramas from Ljubljana and Dubrovnik

Dubrovnik Coast (4 images stitched in Hugin):

Ljubljana, on the river (6 images stitched in Hugin):

01 October 2009

Public/Private Key Math

Heavily sourced from: Cryptography and Network Security, Fourth Edition by William Stallings.

Key Generation

Pick two prime numbers: p and q
p = 7
q = 13
Compute n = p.q
n = 7 x 13
n = 91
Compute the Euler Totient of n: Φ(n) = (p-1)(q-1)
Φ(n) = (7-1)(13-1)
Φ(n) = 6 x 12
Φ(n) = 72
Pick an integer e such that the greatest common divisor of Φ(n) and e is 1, and e is greater than 1 but less than Φ(n):
gcd(Φ(n), e) = 1, with 1 < e < Φ(n)
gcd(72, e) = 1
Choosing e: 5
Determine the value of d using the formula:
d x e mod Φ(n) = 1
d x 5 mod 72 = 1

So d x 5 needs to be 72*(some integer) + 1:
73 (not divisible by 5, won't work)
145 (divisible by 5, that will work)
145/5 = 29

Plugging back in:
d x 5 mod 72 = 1
29 x 5 mod 72 = 145 mod 72 = 1

d = 29
With d, e, and n computed, we have our keys:
Public Key:  KU = {e, n} = {5, 91}
Private Key: KR = {d, n} = {29, 91}
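If you'd rather let a program do the searching, here's the same walk-through as a few lines of Perl (a sketch: the brute-force hunt for d is fine for toy numbers like these; real implementations use the extended Euclidean algorithm):

use strict;
use warnings;

my ($p, $q) = (7, 13);
my $n   = $p * $q;              # 91
my $phi = ($p - 1) * ($q - 1);  # 72
my $e   = 5;                    # gcd(72, 5) = 1, and 1 < 5 < 72

# find d such that (d * e) mod phi = 1
my $d;
for my $candidate (2 .. $phi) {
    if (($candidate * $e) % $phi == 1) {
        $d = $candidate;
        last;
    }
}

print "Public key:  {e=$e, n=$n}\n";   # {5, 91}
print "Private key: {d=$d, n=$n}\n";   # {29, 91}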
Encryption and Decryption

Now, we're ready to encrypt. Assuming plaintext M, we compute ciphertext using the formula: C=(M^e)mod(n)
Assuming plaintext M=10
C = (M^e) mod(n)
C = (10^5) mod(91)
C = 82
Compute 10^5 mod 91 via wolframalpha.

Decryption uses the formula: M=(C^d)mod(n)
C = 82 (from above)
M = (C^29)mod(91)
M = (82^29)mod(91)
M = 10 (that worked!)
Compute 82^29 mod 91 via wolframalpha
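Or skip wolframalpha entirely: Perl's Math::BigInt has modular exponentiation built in (bmodpow), which handles both directions with the toy keys from above:

use strict;
use warnings;
use Math::BigInt;

my ($e, $d, $n) = (5, 29, 91);

my $m  = Math::BigInt->new(10);          # plaintext M
my $c  = $m->copy->bmodpow($e, $n);      # C = M^e mod n = 82
my $m2 = $c->copy->bmodpow($d, $n);      # M = C^d mod n = 10

print "ciphertext: $c\n";
print "decrypted:  $m2\n";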

What's remarkable is how monstrous the math gets using relatively tiny prime numbers to start with. We started with 7 and 13 to create our keys, and our final computation (the decryption) required us to compute 82^29, which is:
31666620625027474232451213268613396669946986162166956032
Our keys were of a really trivial bit-length: {5, 91} and {29, 91}. Consider that most asymmetric algorithms talk about key lengths greater than 1000 bits. Now consider what the size of the exponents may look like, and the corresponding products that need to be reduced modulo n.

10 August 2009

Dilbert meets Hobbes

Today (sadly), I was forced to use a laptop running Windows.




  • I connected a mouse to it (using a USB port)

  • The hardware drivers kicked in and the mouse started to work

  • I used the mouse to move, and click on a window

  • Windows informed me that a mouse had been detected and installed (after I used the aforementioned mouse to move and click on something)

  • This informative dialog stole the focus from the application I was using, after I used the mouse it so eagerly wanted to let me know it had found

  • I have dogs that are better behaved, and far, far more intelligent



Sigh. Pull my teeth out with rusted pliers and pour some salt on whatever is left. If you love eating shit, then..