Sunday, December 30, 2012

The numbers show C++ is in decline

I love C++. I think it's cool. I wish I knew it better than I do. I think C++11 is awesome and I recommend to people who want to write cross-platform iOS/Android apps that they use C++.

That said.

There are a lot of reasons not to use C++ for anything in userland other than, say, games. And even then, I would say this is more a restriction of the platform and its APIs than anything to do with language advantages. You can manage memory off-heap when needed in managed languages.
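
For instance, here's a minimal Java sketch of one common approach, NIO direct buffers, where the allocation lives outside the garbage-collected heap:

import java.nio.ByteBuffer;

public class OffHeapExample {
    public static void main(String[] args) {
        // Allocate 64 MB outside the Java heap; the GC never scans or compacts it.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        // Read and write at fixed offsets, much like raw memory in C++.
        buf.putLong(0, 42L);
        buf.putDouble(8, 3.14);

        System.out.println(buf.getLong(0));    // 42
        System.out.println(buf.getDouble(8));  // 3.14
    }
}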

This blog post does a good job of documenting reasons not to use C++. Much of it is debatable, and Microsoft is proselytizing a "C++ renaissance". I wish that were the case in a lot of ways, but the rest of the world disagrees. I present the following:

[Embedded chart: job posting trends for C++]

I think this reflects the kind of demand for C++ engineers you see out there in the world. And let's remember, this only samples the jobs that list C++ somewhere in the description. It doesn't mean the job has anything to do with programming C++.

C++ is fantastic for game programming and HFT. I think both of those careers are going to be fairly short-lived in the greater scheme of things, and the above reflects the way the job market is headed.

Thursday, December 27, 2012

The Unspoken Linux Future: Android

I like this blog called "Linux Rants" and I follow Mike's posts on G+. He posts semi-frequently (i.e. doesn't take over my feed) and I like reading the hardcore Linux angle towards tech topics.

Earlier today, he linked to this article in PC World called "Five Reasons 2012 was a great year for Linux". My response was simple: "how did Android only get one mention in this article?" For all of the advancements that Linux on the desktop has made in the past several years, none of that compares to Android. Let's look at the numbers.

  • Ubuntu has something like 20 million existing users. Android will add that number of new users in the next 15 days.
  • The article discusses "preloaded prevalence" and a handful of companies shipping PCs with Linux preloaded. Android must have dozens, several of which are making billions shipping Android. I can't find a definitive list, but I counted at least 20 here.
  • The article also discusses "gaming acceptance" -- and I agree, it is a huge deal that Valve is beginning to support Linux and a great hedge for them -- however, if you compare the sheer number of games, or hours of games played, Android will again dominate.
  • Android has already been forked successfully by many hardware vendors into successful products. Nook and Kindle Fire come to mind.
I'm not saying this all to pooh-pooh the results that Ubuntu and others have put up there. I'm pointing this all out to get Linux fans (a group I include myself in, even though I type this on a Mac currently) to focus on the future. That future will be Android. At some point, the several million user base of OG Linux becomes secondary to the billion user base of Android (est. June, 2013). Or, if Tomi's predictions are correct, the two billion user base by 2015.

When something has that much mindshare, it permeates everything. That's how Windows became a viable alternative and then the mainstream for workstations (RIP SGI) and a very popular server platform. When a platform can offer vertical integration, that's a big deal, and something Microsoft leveraged very well. I think Android is going in the same direction for a slightly different reason...

So now imagine the future. Try even today. What are we using? Mostly cloud services. No need for corporations to run their own servers. Email is hosted by Google. Docs are Office 365 or Google apps. Even code is in the cloud. I made Github the standard for our engineering team. Reviewboard is a SaaS we pay for as well. Every possible chance, I push stuff we need up to a SaaS product or the cloud.

This is already the standard. When I hear about school districts wasting money on Exchange and Windows (or Mac), I seriously get angry. Take Google up on their offer to do this for you for free. Then just get educators Chromebooks or Transformers. Buy a few Windows or Linux boxes for the educators who need them to run specialized software.

In that future, why do I need an OS like Linux as it's baked today? I don't. I don't need X11. I don't need chkconfig, mysql, etc. It's handy for developers, but even then I almost exclusively work on remote boxes as it is (the exception is when using IntelliJ).

So while I'm happy that Linux is making strides to be easier to use, it's like Windows 8: polishing the legacy path. There's no growth left there. I've tried installing Linux on Mac and using it a few times but have given up. I just don't care enough to deal with the driver crap.

Instead, I'll just bide my time for the laptop of the future. The laptop of the future is an improved Transformer or Chromebook. The desktop PC of the future is a Chromebox, "Androidbox" or something like it. And while there may be forks of Linux in the future specific for gaming (as it appears Valve is looking at with their hardware), to me, the massive userbase of Android and the hardware support there does make this a mutually exclusive choice. You can focus your open source development, OEM work, etc., on "legacy" Linux distributions like Mint and Ubuntu. Or you can focus on Android. I just don't see people choosing anything but Android and the web for big time open source efforts, maybe as soon as 2013.

Full disclosure: several months ago, I contacted a friend at Google about potentially porting X11 to Android in order to give a better path to making Android a full-fledged desktop OS. I ended up scrapping the idea because I figured that, ultimately, another solution would emerge for windowing on Android and for its native apps.

Thursday, November 22, 2012

What's up with Nexus?

I just saw a Nexus 7 ad on TV during the football game and was reminded to post this. The ad featured all three Nexus models but focused on the 7.

What the hell is going on with Nexus?

The N4 is poised to be the best-selling Nexus ever. The community has been raving about this phone, the missing LTE aside. The thing sold out in something like 29 minutes and has been unavailable ever since.

Just to put that in perspective, it launched on 11/13. So it's been sold out for 10 days now (since it was basically never available for non-F5-crazy mortals). Google has wasted nearly 25% of their window before Christmas with a sold out item. As the world's largest advertising platform, it's somewhat mind-boggling that they don't understand basic product launching strategies around Christmas.

So unless Google surprises everyone with a huge Black Friday or Cyber Monday push, I'm going to chalk this up as one of the worst gadget launches I've ever seen. If it's not available for Christmas, what was the point of launching it in November? Wait until January. Otherwise it's just a distraction and given the rate that technology obsolesces, by the time it's available it will be obsolete.

I'm really confused by the Nexus strategy. They have an amazing device out there for a bargain-basement price that no one can buy. Every other comparable device on the market is $700 unlocked. You would think that, since Google presumably wants to make up those profits in volume, they'd be making and advertising the thing like it was going out of style.

The other thing I wanted to say is that the Pure Google experience I have with my Nexus is now one of the buggiest smartphone experiences I've ever had. 4.2 is even more of a disaster than iOS 3 on my iPhone 3G.

Many, many apps were crashing on day one of the 4.2 update, not the least of which was the Gmail app, which is still crashing. If Google isn't testing their OS releases with the Gmail app, what are they testing them with? So imma throw down this QA rule and you can decide if it's a good one:

If the Gmail app ever dies with NullPointerException because of a new OS release, don't ship it.

Does that sound reasonable? I didn't buy the Nexus to be your beta tester. I bought it to get the first release of the pure Google Android experience, and with that, I expect some good QA. You have three devices on the market that were getting this release.

Compare this to Apple, who seem to be able to release their OS with dozens of carrier-specific builds around the world on the same day and have it be stable. Apple used to have some of the worst QA (well, they still have some of the worst on the Mac), now I'd say they have some of the best in light of what they accomplish with their iOS releases.

4.2 has all kinds of craziness. It lags like mad, which is super-noticeable when you're trying to use the new (and awesome, except for the lag) gesture typing. The Wallet app is screwed up and I've had to hand over my credit card in lieu of it. My Starbucks app lost all of its data. I'm forgetting more as well.

In advance of 4.2, I was prepared to commend Google on how significantly they've iterated in the past year. ICS and Jelly Bean, plus Jelly Bean 2 (?), have some insane feature advancements for the platform. Android is easily the most advanced smartphone OS out there. And then this release brought all of that to naught because of the bugginess. Even a hacked version of 4.1 on my VZW Nexus was more stable. What's up with that?

All of this really makes me wonder what's going on over there. I love Google, Android and Nexus. I know Google hasn't made a lot of money on Android, and my mind boggles at why they're wasting golden opportunities like launching the N4 at Christmas. I hope they surprise me this weekend.

Sunday, November 18, 2012

On the opportunity that Windows 8 provides

Windows 8 is a usability disaster by most accounts. Most recently, I read this article that details some of this from a study of a dozen users. Read it, it's very comprehensive.

I've been using Windows 8 since the day it was released, and it's obvious to me that Microsoft misstepped by trying to merge the desktop and the tablet. The problems are not just "getting used to it" type problems like the Office Ribbon.  There are deep, deep usability issues that will need to be rectified either by further innovation (hard) or by reverting to the old (easy, but embarrassing).

One of the most obvious usability nightmares occurs when I hit the Windows key and start typing the name of an app I want to run. In the past, this would open a small entry box in the Start menu, complete the app name and run it when I hit return.  In Windows 8, it brings up a full-screen window and does the same thing. But let's think about this: you just replaced the entire screen with a big panel in order to find one app. When I hit return, it pops out of this experience to run the app, which is most likely back on the desktop I just came from, since no productive apps live within Metro itself (nor should they -- read the article above). "Jarring" is the nicest way I can describe this interaction.

Windows 8 has put Microsoft in a tough position. The next logical question is, how can one profit from it? Who out there can move quickly to exploit this opening?

Most people are going to contemplate the obvious choice: that the opening is for Apple, like, for example, this article by JLG. I disagree. The tablet is not a replacement for the desktop PC in the way knowledge workers need. Additionally, Mac OS X is fairly unloved in the corporate environment. Apple hasn't nurtured the enterprise (compared to the consumer) and corporate IT isn't going to rush to replace PCs with Macs anytime soon. No major company I know of is thrilled at the idea of single-sourcing its hardware, which is what standardizing on Apple would mean.

Then one might consider Android. Android has the same issue as the iPad: it's not a replacement for a desktop PC. At least, not yet... maybe someday. It at least supports a mouse, but it's missing the kind of productivity apps one would need.

Chromebooks are interesting but can't support legacy PC apps.

So the question is, what plays out there could:

A) Leverage the investment IT departments have already made in PCs.
B) Allow IT departments to continue buying from their existing vendors like Dell, HP, etc.
C) Help with application deployment headaches.
D) Still slowly morph companies away from their PC environment.

To me, almost everything that falls out of these points is going to be rallying around the web.

One play is focusing on making web applications more desktop friendly, then selling web services that can supplant Office, Exchange and so on. Gee, sounds like a good one for Google to take on. Mozilla could too, of course.

For example, Chrome's usability as a native application is pretty dreadful. Pretty much the only thing I can do is "pin" my Gmail tab, which is still easy to close when I don't intend to.

It should be built into Chrome that I can create a regular OS-like desktop application for any webapp. It should get first-class behavior: its own Dock/Taskbar icon, real alerts, and so on. Make it a separate process with a distinct executable name if you have to.

Funny story: Microsoft already did this! They did it for IE 9. For some reason, the competition never caught on that this is a pretty good way to brand your web app on someone's taskbar. And yet, it could be taken so much further. Allow the application to change the native menu bar. Completely hide the fact that it's a web application. Isn't this what XAML was supposed to be? Mozilla and Google should be pushing on the same concepts.


The second play I think would be very wise would be tooling for the enterprise to replace all of its desktop .NET and VB applications with web-based ones. Just yesterday, I was talking to an engineer who described his company's goal of making a JS framework that could be used for this kind of purpose in the enterprise. There are a lot of small companies rallying around the idea of HTML5 and Javascript end-to-end in order to solve these problems. But you know what big company could do great if they just played their cards right? Adobe.

Adobe should buy the companies and sponsor the open source projects doing this kind of work right now. VMware gets what they need to do in their space: they're sponsoring several projects that can be used as PaaS, and when the day comes, they want to be the best at supporting those platforms in the cloud.

Adobe should be taking all of these projects and making the tooling around them excellent. Why do I still find it easier and faster to type JS, CSS and HTML into vim? Adobe needs another Flash success story for themselves. Right now they have these "Edge" tools that, frankly, look less powerful and less interesting than what I can get out of the Chrome developer panel.

So there you have it, a couple ideas of who and how one can benefit from Microsoft's Windows 8 screw-ups. The common thread is the web, however. Nothing about iOS or Android makes it clear how they could start replacing billions of desktop PCs and Office installations anytime soon. Web applications that more seamlessly integrate with the existing legacy platform... and tools that your internal developers can use instead of .NET or VB6? Both of those seem a lot more direct.

Monday, October 22, 2012

Apparently, I'm wrong about the New Chromebook

My friend A-Rock ripped me about my last Chromebook post, saying I was reviewing something that wasn't out. It wasn't a review. It was a discussion about whether there is a value proposition for the Chromebook.

Either way, I can own up to when I'm wrong, and I may be wrong about this Chromebook thing. It seems that the new Chromebook has plenty of interest. It's currently #1 on the Amazon store under laptops and tablets.

So I started considering more about who would be interested in this device. I had a couple thoughts.

  • College students at schools that are using Google Apps. It turns out that 61 of the "top 100" (not sure of metric) are using Google Apps. Yale, Northwestern, BU, and a lot more. If I were a poor college student again and didn't have a computer of my own, I would probably consider one of these.
  • K-12 students. It seems that districts out there are trying to get a laptop in every child's hands. It occurred to me that advanced districts might be web-based enough to make something like this worth it.
  • Companies that are using Google Apps or are otherwise entirely cloud based. I heard of a bank with >100K people moving to Google Apps recently. Chromebook could be a good device for my company's sales folks -- except for the lack of Skype and Go To Meeting. Although, even Windows-based organizations that use Citrix could benefit from using Chromebooks like this.
So, maybe there is a market for this device. With the "cloud storage" benefits, I can see how it starts to add up to having some real value.  Either way, I wish Google luck on this endeavor.

Thursday, October 18, 2012

The New Chromebook

I commend Google for hitting the $250 price-point on the new Chromebook. No matter what your opinion of the Chromebook, that's a cool achievement. I think their vision for this device is ultimately kind of like what Jeff Bezos' vision for the Kindle is: the thing will be free. We'll just hand you a Chromebook because we know once out of every hundred searches you do with Google, you're going to give us $8 by clicking on an ad for refinancing your mortgage (e.g.).

Except, as much as the geek/gadgeteer side of me wants to grab one of these just to have it, I cannot for the life of me figure out why I would.

Buying a disposable toy tablet like an iPad is one thing. With the iPad, you know what you're getting into... this is a device that will never supplant your laptop or your phone. It's more like a slick toy/consumption device. The kids love it. Yeah, it's nice for browsing around while you have "The Bachelorette" on in the background. Neat. But even if the device is fairly unproductive overall, there is a clear place and time for that device to be used. It fits a niche.

The Chromebook, not so much.

I'm typing this on a Macbook Pro Retina (I'll review that at some point). My wife uses a Lenovo Thinkpad. My prior 4 laptops have been Macbook Pro 15" supplied by workplaces, and before that, high end Dell and HP laptops. Even my kids use a 2008 Macbook Pro I bought off a friend.

I would never buy a Chromebook because there's no case for me to use one. I have laptops coming out of my ears. I don't need a less powerful one.

So... what's the use case for this Chromebook? What is the value proposition for the consumer?


  • Netbooks are (were?) around $250 and ran Windows and Office; this doesn't.
  • I bought a used Macbook for $75 from my friend.
  • There are Macbook Pros galore on Fleabay for under $250.
  • Don't even get me started on the cheapness of desktop computers, Linux, etc.
The consumer can buy a Chromebook for $250 and be locked into Google's web based life, or they can choose any one of these computers that aren't limited to browser-life.

So let's go back to this value proposition... what are Google's selling points here? Paraphrasing some:

  • It's always up to date. So is Chrome on my kids' $75 Macbook Pro. And Office. And Firefox. And iTunes.
  • It's cheap. So what, so are the above options.
  • Always connected. So is my Macbook Pro.
  • Virus-free. Frankly, so is Windows. At least for me.
  • Boots up in less than 10 seconds. All of my machines are kept in sleep mode, all the time. And so they wake up instantaneously.
I, for one, actually believe in Google's vision of a completely online future and the browser being a huge part of applications in that realm. I think the browser is one of the few hopes we have towards avoiding another decade of something like being MFC experts, just this time with AppKit.

Even with a totally online world, the Chromebook struggles to have a customer.  It's not shiny and cool like the tablets are.  It's not as useful as a Mac or Windows laptop -- which a huge lot of us have already anyway.  Is there anyone on this planet really asking for a $250 laptop that does less because it's all web-based?  $250 is out of reach for the very poor, and the product is an unnecessary one for those who can afford it.


Tuesday, October 16, 2012

On Dart

Dart was announced a year ago today. They celebrated by pushing out their "M1" (aka arbitrary milestone) release of their SDK today with a lot of cool tools and such.

The reaction is mixed. Many, including a commenter on Hacker News who I believe works at Google, ask "What is the point of this? Why is Google wasting their time with something no one will adopt?" Others think it's a significant development. One commenter on Reddit wrote "Dart - the language for the silent majority of programmers."

I agree with the second commenter; here's why.

"The silent majority of programmers" are the people are working in statically typed languages every day. C, Java, C++. They're not top commenters on Hacker News, they're not at Hackathons or Mongo meetups. These are the people who are going to lock into something like Dart and use it. Why do you think so many people took up Google's last statically-typed-browser-language-thingy GWT? It gives structure for programmers who are not used to dynamic languages. Tons and tons and tons of contractors I've talked to over the past couple years have used GWT in the enterprise.

For myself -- someone who actually is used to working in dynamic languages -- I like Dart because I recognize that Javascript is too unwieldy for the kind of very large web application development that is coming down the road. At some number of lines of code -- I have no set number for this, but it eventually happens -- Javascript becomes just too difficult to work with. It's so dynamic and unstructured that tooling cannot help you make sense of what's going on in the program. Can you imagine a million lines of Javascript? Me neither.

Big software engineering companies like Microsoft and Google seem to recognize this shortcoming. Dart and TypeScript are both attempts to help correct it, with the biggest feature being static type checking. TypeScript looks good too, but Dart is a more ambitious attempt that includes a new VM, standard libraries and very robust tooling.

Look at it this way. If Javascript is C, Dart is C++. The goal is to add better type safety and more structure to better deal with larger programs. C++ was compiled to C for a long time via CFront. This is no different. For the short term, Dart will compile to JS. If Dart catches on, then the Dart VM will become more widespread. (The Dart folks might not like me comparing their language to C++ but C++ is a hell of a successful language and IMO a decent model to work from.)

Probably the most absurd thing I read today though is the notion that Google should be simply making a new cross-browser VM like the CLR or JVM and open sourcing that. This has a more difficult adoption path than Dart's proposition right now, where you can use Dart and ship it to any browser. And besides, it has been tried. Heard of Silverlight? That was the CLR, embedded into web browsers. That should have been the holy grail. You could write in Ruby, Python, and C#... in the browser... with .NET libraries available ... Microsoft made it available on all browsers ... and no one, anywhere, ever cared or used it. Maybe in 2004 it would have mattered if Google went that route, but not now. That ship has sailed.

So kudos to the Dart team. Nice job. It looks good.

Saturday, October 13, 2012

On Nokia


Tomi posted another good article on Thursday.  I love Tomi.  He has done a great job of illustrating how Stephen Elop has destroyed Nokia.

However, Tomi is a Symbian apologist and has been trying to prove for two years now that if Nokia had just stuck with Symbian and developed MeeGo, everything would be fine. I agree with him that the burning platform memo destroyed the Symbian market overnight, but I think most here will agree that Symbian was a dead-end strategy and Android would have eaten it as well.  As a counterpoint to his "it was growing in 2010" belief:  RIMM was also seeing growth during 2010.  How are they doing now, sticking to their strategy from 2010?

He also fails to mention that Symbian was "winning" market share because it was installed on candybar phones that were then called "smartphones".  I tried these phones. The user experience was horrible, and I doubt many people actually used them as smartphones.  On the MeeGo front, I tried to play with MeeGo in late 2010.  It was a disaster.  I couldn't even get a developer environment running, much less do anything with it.  I believe I mailed the developers and asked them "how the hell do I get this to work?"

Elop was right to think a new strategy was needed.  Except, with a company as large as Nokia, you can't bet the whole company on someone else.  That is a great plan for a small company.  Go get Apple's runoff money by betting everything on making widgets for the App Store.  Whatever.

But for Nokia?  A company valued at $43 billion when Elop wrote his "burning platform" memo?  In 2007, valued at $167 billion?  That's insane.  Yet, that's what Elop did.  He bet Nokia's future on Microsoft's success.  You cannot risk that kind of shareholder value on someone else, even Microsoft.  Right now, the most recent Lumia is at risk of not shipping because Microsoft can't get the software together in time.

Now, Nokia is releasing fantastic hardware and is valued at just $9 billion.  Elop must be fired before the company folds in on itself completely.  He traded one loser (Symbian) for another (Windows Phone), with apparently no hedge.  HTC, Samsung, LG -- they all adopted Android while keeping their Windows Mobile / Windows Phone lines in place.  Nokia threw it all out and bet everything on Windows Phone 7 before either Nokia or Microsoft had anything shippable.  I can't figure out if Elop is a Microsoft mole--trying to save Microsoft from the outside or make it so Nokia can be acquired by MS--or just an idiot.

And if the Nokia board has not demanded the Lumia line be prepped with Android at this very moment to release by the end of the year, they should all be relieved as well.  This whole thing is idiocy.  Microsoft should probably just buy Nokia, though I fear for Tomi's health if they do so.

Wednesday, October 10, 2012

The Coming Mobile Backlash?

I've had an iPhone or Android device on me at all times since 2008. I have owned 5 tablets in the past 12 months. I'm no stranger to the "Post-PC" era.

And yet, sometime last year, something clicked.  When I use my phone or a tablet or anything that's supposed to be "post-PC", I wish I had a PC.  Mac, Linux, Windows... even BeOS, I don't care, just something with a normal keyboard, a big screen, a mouse and multitasking, overlapped-window OS.

Someone else out there feels this way too, right?  You're trying to type on tiny screen keys, and even with the auto-completing dictionary you can't reach one-third of the speed or accuracy you'd get on a real keyboard.  Or you're trying to browse the web and fighting that awkward focus/refocus thing just to see a block of text.  Or you've been harassed by a website enough to install their native app, only to find out it's buggy and lags behind the real site in speed and functionality.  Or you've got that iPad awkwardly set up on its foldable case at a coffee shop, enjoying the 4" of screen area not covered by the keyboard, like a TRS-80 Model 100.

In the pre-slab-phone era -- aka the Blackberry/Treo era -- I used to make fun of Blackberries as "Great, Awesome, Thanks" devices.  People I knew who used them (mostly my managers at a large games company) typed replies to email with pretty much one word: "Great", "Awesome" or "Thanks".  And now, I'm one of those people with my 4th-generation slab phone, a Galaxy Nexus.  How far we've come.  I am paying $200-$300/yr for these devices and $400 for a data plan where, most of the time when typing a reply, I long to be sitting at a desk with a keyboard and mouse.  So I just type "Thanks", "Awesome" or "Great".

Yeah, it's great to be able to have a device with you everywhere.  I also have my laptop with me everywhere, which is about ... this is just an approximation ... 1,374x more useful than any of my tablets or slab phones have ever been or ever will be.  So far, the truly useful things I've experienced with my phone are maps and Google Now.  Everything else has been kind of meh.  And now "the desktop" is being eroded by the trends in mobile because of this Post-PC thing.  The worst case scenario, to me, is Microsoft's proposal with Windows 8:  all the advantages of a PC form factor with a walled garden of application deployment through an app store.

I don't want another tablet.  I don't want another phone.  I think a lot of people looked at the iPhone 5 and said, "neat, now what?"  Seriously:  now what?  What can we do with the latest iteration of any of these devices that's an improvement over what I do at my desk every day?  The PC revolution was about business, learning, workplace efficiency, and creativity.   Remember Lotus 1-2-3, or AppleWorks?  Those were like "WTFOMGHOLYWHAT".  This mobile revolution is, what?   Games?  Texting?  NFC?  When I try to pay with my phone, it's slower than using a credit card.  The pictures are worse than my camera, phone calls worse than my original RAZR flip phone and everything else worse than my laptop.

So as you can tell, I'm having a bit of a backlash against mobile, and I wonder if other people will too.  I suspect so.  Once they get terribly bored of staring down at their phone on a BART platform, checking Facebook for the 50th time, then realize that a blog post about how annoying phones are would be torture to actually type on that phone, maybe people will start to conclude that all the claims of the Post-PC era are kinda premature.

And with that, I'm going to go grab XCOM: Enemy Unknown off of Steam.

Sunday, July 15, 2012

ADB can't see Kindle Fire

So you've installed Kindle Fire Utility and it still won't see your Kindle Fire.  Try this magic:


  • Open Device Manager
  • Open the "Android Phone" entry
  • Right click whatever's under Android Phone ("Android ADB Interface", or "Android Composite Interface")
  • Go to Update Driver Software
  • "Browse My Computer for Driver Software" ... "Let me pick from a list of ..."
  • Turn off "Show Compatible Hardware" checkbox
  • If you right clicked "Android Composite Interface" above, try installing the driver for "Android ADB Interface".  Or vice versa.  This seemed to do the trick for me.

Sunday, May 06, 2012

Jersey Tests with Embedded Jetty and Spring

One of the bittersweet things about using Java is that there's a library for everything.   The good is that if you need to do something, there's a library.  The bad is that the documentation is typically terrible, and it can take hours (or days!) to figure out how the hell to make something work.  Today's lesson in this is testing Jersey services when you're deploying on Embedded Jetty and Spring.

The problem with using this combo is that we don't use web.xml for configuring things; it's all done with Spring.  And I want to test with InMemoryTestContainer.  So how is it done?

Let's say you have this service:


package com.trimbo.web.api;

import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/jobs")
@Component
@Scope("prototype")
public class TestService {
    @GET
    @Path("/")
    @Produces(MediaType.APPLICATION_JSON)
    public String index() {
        return "{ \"a\": \"test\" }";
    }
}

Then you have this configuration in Spring
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">
    <context:component-scan base-package="com.trimbo.web.api" />
    <bean id="JettyServer" class="org.mortbay.jetty.Server"
          init-method="start"
          destroy-method="stop">
        <property name="connectors">
            <list>
                <bean id="connector" class="org.mortbay.jetty.nio.SelectChannelConnector">
                    <property name="port" value="8050"/>
                    <property name="maxIdleTime" value="30000"/>
                    <property name="acceptors" value="10"/>
                </bean>
            </list>
        </property>
        <property name="handlers">
            <list>
                <bean class="org.mortbay.jetty.servlet.Context">
                    <property name="contextPath" value="/"/>
                    <property name="servletHandler">
                        <bean class="org.mortbay.jetty.servlet.ServletHandler">
                            <property name="servlets">
                                <list>
                                    <bean class="org.mortbay.jetty.servlet.ServletHolder">
                                        <property name="servlet">
                                            <bean class="com.trimbo.web.api.JerseySpringServlet" />
                                        </property>
                                        <property name="name" value="jersey_api"/>
                                    </bean>
                                </list>
                            </property>
                            <property name="servletMappings">
                                <list>
                                    <bean class="org.mortbay.jetty.servlet.ServletMapping">
                                        <property name="pathSpec" value="/api/*"/>
                                        <property name="servletName" value="jersey_api"/>
                                    </bean>
                                </list>
                            </property>
                        </bean>
                    </property>
                </bean>
                <bean class="org.mortbay.jetty.handler.DefaultHandler" />
                <bean class="org.mortbay.jetty.handler.RequestLogHandler" />
            </list>
        </property>
    </bean>
</beans>
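
For completeness, the launcher is only a few lines.  This ServerMain is a hypothetical sketch, assuming the XML above sits on the classpath as service-spring-config.xml (the same file the test below references):

package com.trimbo.web.api;

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ServerMain {
    public static void main(String[] args) {
        // Instantiating the context creates the JettyServer bean, and its
        // init-method="start" brings Jetty up on port 8050.
        new ClassPathXmlApplicationContext("service-spring-config.xml");
    }
}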

And here is your glue class for that JerseySpringServlet, which injects the App context.
package com.trimbo.web.api;

import com.sun.jersey.spi.spring.container.servlet.SpringServlet;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;

public class JerseySpringServlet extends SpringServlet
{
    @Autowired
    private ApplicationContext applicationContext;

    @Override
    protected ConfigurableApplicationContext getContext()
    {
        return (ConfigurableApplicationContext) this.applicationContext;
    }
}
Then this would be your test class that uses InMemoryTestContainer
package com.trimbo.web.api;

import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;
import com.sun.jersey.test.framework.AppDescriptor;
import com.sun.jersey.test.framework.JerseyTest;
import com.sun.jersey.test.framework.WebAppDescriptor;
import com.sun.jersey.test.framework.spi.container.TestContainerFactory;
import com.sun.jersey.test.framework.spi.container.inmemory.InMemoryTestContainerFactory;
import org.junit.Test;
import org.springframework.web.context.ContextLoaderListener;

import static junit.framework.Assert.assertEquals;

public class TestServiceTest extends JerseyTest {
    @Override
    protected TestContainerFactory getTestContainerFactory() {
        return new InMemoryTestContainerFactory();
    }

    @Override
    protected AppDescriptor configure() {
        return new WebAppDescriptor.Builder("com.trimbo.web.api")
                .contextPath("/")
                .contextParam("contextConfigLocation", "classpath:service-spring-config.xml")
                .contextListenerClass(ContextLoaderListener.class)
                .build();
    }

    @Test
    public void testIndex() throws Exception {
        WebResource resource = resource().path("/jobs");
        ClientResponse resp = resource.get(ClientResponse.class);
        assertEquals(200, resp.getStatus());
    }
}
This basically configures an AppDescriptor with our package prefix, the "/" context path, and our Spring config as the application context.  Note that the TestContainerFactory is our InMemoryTestContainerFactory.
Mostly I'm writing this down for myself.  Hopefully it helps someone else too.

Sunday, April 15, 2012

Trying to be cute harms your startup

Yet another MongoDB user tosses it under the bus.  This Hacker News thread has more info.

Their article starts off with "This week marks the one year anniversary of Kiip running MongoDB in production".

It's April, 2012.  I just have to wonder, did the guys at Kiip.me not read all of the criticism of MongoDB in April, 2011?  Urban Airship had already said they were moving back to Postgres.  If that's not early enough, how about April, 2010?  I tried using Mongo at the beginning of 2010 and exchanged private emails with their dev team about virtually all of the concerns listed in their blog post in January, 2010.  Who at Kiip.me missed the memo that Mongo is using system paging for persistence, doesn't have durability by default and has a global write lock?

But I guess what we can say here is that yet another startup tried to be cute, then ended up spending their time dealing with the cute new tech when it broke.  When I say "trying to be cute" -- I mean that startups choose this tech because they think it attracts recruits and attention, or saves them time to market because it's "schemaless".

Yet, has anyone heard of kiip.me other than people reading their post about Mongo?  I haven't.  That seems like a sucky kind of failure.  They spent their time wrangling Mongo instead of building a product that got their name out there because of the product itself.

From time to time on HN or elsewhere, people ask "what tech should I use?"  The answer in my mind is "Java, MySQL, Apache/nginx".  So here goes:

Just use Java, MySQL, and Apache for everything web and server related*


(I'll accept the following substitutions:  C# (for you Windows folks), PostgreSQL/SQL Server/Oracle for MySQL, nginx for Apache.  If you have to be all dynamic and stuff, CPython 2.x.)

It's so boring, I know.  As many of my faithful readers know, I've historically been the one out there trying out all of the new hotness.  How can I be the one proposing to use Java Beans?  XML configuration files for Spring?  Barf.  It makes your startup less cool because you're using such boring technology.  Right?  RIIIIIGHT?

You know what's cooler than Scala, Clojure, node.js and MongoDB?  A billion dollars.

On the languages:  LinkedIn, Google and Netflix use Java.  Instagram, YouTube, Slide and Dropbox use CPython.

Databases:  AdWords, to this day, AFAIK still runs on MySQL.  Facebook, MySQL.  Slide, MySQL.  YouTube, MySQL.  Stop saying MySQL doesn't scale.  Scaling is hard no matter what you use.  MySQL is a well-known quantity, good and bad.  Just use it.  Or use PostgreSQL, which is what Instagram did.

Web servers:  Apache and nginx serve 99.9% of everything that doesn't come from Microsoft.  So just use them instead of trying to be cute.

Beyond just web apps, given my experience with Java over the past year, where I chose it for a non-web-server project at work, I'd default to using it for any server-related thing I needed to write.  Only in extreme circumstances would I go with C++ or C -- probably limited to writing a front-end web server (e.g. nginx) or a database.  Even if I were writing an MMO, I'd use Java for the entire server stack.  I know my game friends will laugh at me for that, but I'll stand by the claim.  It's so boring and yet so functional and fast.

On the topic of "schemaless"

I'm tired of this development-trope of "schemaless" databases like Mongo.  I'll let you in on a secret:  There's no such thing as "schemaless."  

You have a schema somewhere, whether you like it or not.  Your code defines it, or your database defines it.  If you store to disk with protobufs, your IDL defines it.  The other day on HN and on Twitter, I postulated that on a long enough timeline, the probability that your data needs to be accessed by more than one application goes to 1.  If you think just an API can deal with this, consider if you wrote a monolithic application that you now want to split into separate services in a new framework/language.  Suddenly, you're doing a ton of code refactoring where a database schema and views could have solved it easily.

RDBMS were developed over the last 50 years to handle this exact scenario.  Views, triggers, stored procs, constraints:  that's what they're all for.  If you take the same seriousness to developing a database as you do towards developing code, this can all be clean, efficient and manageable.  Don't like doing it yourself?  Hire a DBA who has a clue.  But this whole tired schemaless thing is ridiculous.  If you think schemaless is good, build your application just storing JSON to individual files on disk and let me know how it goes.  It's essentially the same.
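
To make that concrete, here's a minimal Java sketch of where the "schemaless" schema actually lives.  The field names are made up for illustration:

import java.util.HashMap;
import java.util.Map;

public class UserReport {
    // This method *is* your schema.  Rename "email" in your documents and
    // every consumer like this silently breaks -- no ALTER TABLE, no view
    // to paper over it, just nulls and runtime surprises.
    public static String contactLine(Map<String, Object> userDoc) {
        String name = (String) userDoc.get("name");
        String email = (String) userDoc.get("email");
        return name + " <" + email + ">";
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<String, Object>();
        doc.put("name", "Trimbo");
        doc.put("email", "trimbo@example.com");
        System.out.println(contactLine(doc));  // Trimbo <trimbo@example.com>
    }
}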

Tuesday, March 20, 2012

Software is not glamorous

You know how pundits talk about how glamour magazines are ruining people's image of themselves?   That it contributes to eating disorders, etc.  Every so often, there's a campaign to undo some of the damage.  Stars with their makeup off, or whatever.   I have no idea whether it's true that glamour magazines have that effect.  I just know people on TV claim it from time to time.

The point of this post is that that effect is true for software.  

Larry Ellison once said (in reference to cloud computing) "The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"

Never, Larry.  Never.  Because there's always a place to sell people glamorous fantasies in any market, most especially a fuzzy area like "computing."  You know this.  Oracle is one of the chief innovators of this sales pitch.  Oracle is a brand that thrives on the idea of fashion:  that a database (a really, really expensive database) is the answer to all of your data problems.  Your scaling issues will magically go away if you use Oracle.  Because a DBA told you so, or because Visa uses it and, hell, they do a lot of volume, right?

But I'm not here to talk about branding for database servers or other enterprise solutions, I'm actually here to talk about software creation: also known as "software engineering", "programming", "coding", "hacking" and whatnot.  That's where fashion has gotten out of control.  

If you're like me and try to keep up with the styles of the time (like an onion on your belt), you probably read Hacker News or Proggit or something along those lines.   Even though they have so much stuff that is uninteresting to me (daringfireball reposts, bleh), I've never been able to keep a steady enough stream of geek news without including those two in my daily rounds of the 'net.  

In any case, they're completely filled to the brim with stories/fantasies of glamorous software escapades.  Wild tales of NoSQL magic, using Go for some one-off server, the latest home-grown solution-of-the-week that does what last week's did except slightly better, oh, and of course, the functional programming unicorn.  On and on and ON it goes.  Day in and day out, someone tells us that they really benefited from this amazing new face-cream language or library they found that all of 3 people in the world have ever used.  Or they checked this thing into Github -- which will surely be the only checkin of it ever, but who cares, it's awesome -- that really did something neat in 30 lines of Scala!  Then the comment wars begin and one guy says Go is amazing because of goroutines, another guy says Go is shit because it doesn't have generics.

Enough! 

It's like the glamour magazines I started this article with.  Jessica Alba is glamorous because she's Jessica Alba.  Buying her brand of makeup doesn't make you glamorous.  Everyone knows this, but it's hard to remember when you look at her picture and she's so damn glamorous, you want that brand of whatever she's selling.

Just like that example, there is an illusion of glamour in software.  We look at small code examples of Haskell and drool over them.  We see the raw write speed in a test bed of MongoDB and wonder why we battle MySQL all day long just to store some stupid virtual junk purchases for our Facebook game.  Our PHP site takes 3 seconds to render a page, so Facebook's HipHop would really help, right?

The problem is, even if you applied all of these fantasies in unison, software still wouldn't be glamorous.  The reality is far from that under the pressures of commercial enterprise.  A couple examples:

* Code for product.  If there is any kind of product involved, you'll get between half and a tenth of the time that should be spent on a piece of code.  Less, even?  Either way, you don't have time to learn something new or do something new; you just need to code it now with whatever tools are handy and can work.

* If there is any kind of customer involved, then you'll be writing to whatever spec they want.  Writing a library for people to use across platform, binding to different languages?  Guess what?  You're writing that in C.  End of story (unless it's JVM, then it's 99.99% probably Java if you want contributors).

Often you'll face both of these, or other situations I can't think of, that make software not as much fun as the fantasy you've been sold at a meetup or on Hacker News.

Just an example:  when I was in college there was a resurgence of interest in Smalltalk.  One of my professors, I think, told me that hedge funds were starting to use Smalltalk for algorithmic trading because it gave them an edge in building all of these esoteric algorithms.  But now, those highfalutin, impractical fantasies are long past.  What do you see people doing trading in?  If they're doing HFT, they do it in C++.  If they do algorithmic trading, they do it in VB or Excel or something.  They have too little time to develop too much with too many external dependencies to screw around with Smalltalk.

The best you can hope for, and what you can aim for within yourself, is what I'm going to refer to as Beautiful Hackery.

Beautiful Hackery is what the greatest software minds I've ever met have been able to pull off when shipping software.  Yes, they wrote it in a boring language like C++ or Java.  Yes, they wrote this software in a hurry, because that's what was required by the schedule.  They made design decisions that weren't the greatest, but they worked well in the context of the schedule and the features needed.  They also did this amazing hack to ship this feature in a way that wasn't a total kludge.  It may even say // TODO: HACK in the code, but when you look at it, you think it looks damn elegant.  That's because those guys are so good, they know it's wrong and they still do the wrong thing right. 

And that's the thing you really need to be able to get excited about.  You wrote just enough code in a language you hate, in an impossible deadline situation, to make something work just well enough to ship, and when you look at what you checked in later, you realize that hack was actually a good idea.  That's a great feeling, if you can get it.

Aspire to that, my friends.

Saturday, February 18, 2012

Fixing The Failure that is G+

Don't believe Google's hype:  Google Plus is a huge flop.

You need to look no further than my own timeline to see it.  I have 287 people in my circles that I follow.  Most are extremely active on other networks -- Facebook, Twitter, etc.  A large number of them have Android phones, so they are presumably automatically uploading their photos and videos for sharing, as I do.

And yet, with those 287 people I follow, there is exactly one update in my timeline this afternoon that's not from myself sharing with other people.

On Facebook, I have 500 friends and have received dozens of updates during that time.  On Twitter, I follow 187 people--fewer than on G+--and have two dozen updates.

The stat that Google keeps giving is "number of people on G+".  Sorry, number of people on G+ doesn't matter.  Everyone with a Gmail or Picasa account is basically being opted in, and no one uses it.  Most of my IRL Google friends don't even use it.

G+ is a huge failure.  The biggest social network no one uses.  It's so obvious it's not even funny.  They've been trying to do what Twitter has done and what Facebook has done -- i.e. being feature equivalent.  G+ is proof that that plan doesn't work.  How many times has Microsoft tried this strategy and failed? (Bing?  Zune?)

And the saddest part is, I actually wish people would use it.  I prefer G+ to Facebook for the most part because I'm a Google-fiend.  I prefer the mobile app by far (the Facebook app is all kinds of jacked up on my Galaxy Nexus).  But the question is, how does Google make it right?

Two answers:

First, stop taking away the services we like and folding them into G+.  We liked PicasaWeb.  But now that it's gone, my family has switched entirely to Smugmug.  Making this change had the opposite effect of what Google intended.

The second answer is focus on the things you have that are novel.  The most novel feature of G+ so far is Hangouts.  It's what draws people to G+ more than anything:  Hang out with Obama, etc.  Figure out how to rally around this with the audience you have.  What about tying it into Google Voice?  What about making the quality crazy-high and taking on Cisco?  What about doing presentations with it?  THINK.  Think of things you can do to rally around this, because no one is looking at people's non-updates on the thing and this is the ONLY novel thing G+ has.

The other thing Google shouldn't be doing is putting G+ in everyone's face every day -- at least for now.  It's almost counter-productive when people sign up and then never use something.  In their minds, they tried it and didn't like it, and getting them to come back is even harder than getting them to come the first time.

In the mean time, I'll have to suffer through Facebook's completely broken Ice Cream Sandwich experience.

[Update]:  As if you need any more evidence, Google has now added "What's hot on G+" entries to my timeline that I didn't subscribe to.  Because there's such a dearth of updates, they have to throw in stuff.

Tuesday, February 14, 2012

Big Data / Small Data

"Big Data" is in the New York Times.  Run for your lives.

The first thing I ask a candidate who says they want to work on "Big Data" is "What Constitutes Big Data?"

They'll throw out a number like "10GB" or "5PB".  Both are ridiculous answers.  I'll let you in on a secret:  there is no right answer to this.  People answer this question based on personal experience.  They'll have just been at a company where a 300GB SQL Server installation is creaking under its weight, so 1TB becomes "Big Data" in their mind.

There are really two axes to look at:

  • Size of the data in terms of disk or memory
  • Complexity of analyzing that data [EDIT:  the complexity of things you want to know]
Most problems don't challenge both of these axes.  Furthermore, most people confuse the two.

Size is not usually a problem.  As Ted Dziuba points out, eloquently as usual, most processing is not complex, even if the data is large in size.  If you need to know whether treatment A or treatment B of your Facebook game sold more virtual junk, it's just not that hard to figure out.  You can do it with grep, cut, sort and uniq.
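
And if shell isn't your thing, the same job is a dozen lines of boring Java.  A minimal sketch, assuming a made-up comma-separated log with the treatment in the second column and the sale amount in the third:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

public class TreatmentTotals {
    public static void main(String[] args) throws Exception {
        // Tally sales per treatment from a log like: 2012-02-14,A,9.99
        Map<String, Double> totals = new HashMap<String, Double>();
        BufferedReader in = new BufferedReader(new FileReader("sales.log"));
        String line;
        while ((line = in.readLine()) != null) {
            String[] cols = line.split(",");
            String treatment = cols[1];
            double amount = Double.parseDouble(cols[2]);
            Double sum = totals.get(treatment);
            totals.put(treatment, (sum == null ? 0.0 : sum) + amount);
        }
        in.close();
        System.out.println(totals);  // e.g. {A=1234.5, B=987.6}
    }
}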

I jokingly posted to Twitter today that I'm working on my Small Data skills for this reason.  I'm helping out with some analytics for some content optimizations, but am using pure unix and a simple Python script to pull it together.  No Hadoop.  No Map Reduce.  Just not needed here.

Complexity is really the major problem to tackle.  Try making some sensible decision based on the information at hand.  The size of the data collected is a crutch used to avoid requiring real intelligence.  A "machine learning" algorithm might require 10,000 keyword searches to deduce what kind of person you are, but a human brain might require just one keyword.  Or even just by lookin' at you.

Bottom line:  learn to differentiate "big" data from just data.  Chances are that you're probably working with regular, boring, small data.  Embrace traditional data marts if you have to for historical analytics.  Then just use the Taco Bell techniques that Ted Dziuba describes above. 

Monday, January 30, 2012

Pentaho won't launch on MacOS X

Download Pentaho and unpack it.  You double-click "Data Integration 64-bit" and nothing happens.

When you try it on the command line, it gives you:

LSOpenURLsWithRole() failed with error -10810 for the file /Users/trimbo/Downloads/data-integration/Data Integration 64-bit.app.

The solution is to give the JavaApplicationStub file execution permission.

chmod +x ~/Downloads/data-integration/"Data Integration 64-bit.app"/Contents/MacOS/JavaApplicationStub

Now you can double click the icon and it will launch.

Hopefully this helps someone.

Saturday, January 14, 2012

Why I didn't go work on Facebook games

Many people were surprised when I left the games industry last year and moved over to a completely different kind of ecommerce engineering position.

Some background.  I started out of college working on computer graphics (CG) for commercials.  This was in 1995, and a lot of money was made doing this.  Most commercials with any kind of CG in them would cost $500K, and commercials that were entirely CG would be $1MM plus.  These numbers are obscene today, right?   And I *loved* doing commercial work.  Commercials were fun, 4-8 week projects that you could just rip through and move on.

Dreamworks had an arm that did commercials, believe it or not -- and I asked Jeffrey Katzenberg if he would invest in the business more.  This was in 2000 or so, and we were doing very well with some spots for Intel and Visa, earning a good profit.  But shortly after I left that company, he killed the whole division.  In retrospect, he made the right decision, even though the business was growing at the time.  The reality is that those margins began to drop, fast.  By 2002, when I left commercials for good, you started to see home PCs capable of doing real-time editing of standard-def video.  Within a few more years, HD could be edited on a home PC.  The whole thing had been commoditized.

Reflecting on that over many years, it became apparent to me that I always need to watch for industries becoming commoditized.  This is what Katzenberg saw happening in commercials and in film effects.  I did another gig in film effects, but the next opportunities were all overseas (ruh-roh, again).  So I moved into video games.  And now games are going down the same path.  Here's why.

Let's start with Facebook games.

First of all, Facebook games are almost never games.  They are designed to be obligations, like Tamagotchi or SeaMan or Animal Crossing.  One of the best essays ever written about Farmville is this one, which outlines theories on what defines a game, and why Farmville is not one.  But let's pretend they're actually fun, entertaining, escapist games at the moment, and they're just freemium in business model.

How do you make a freemium Facebook game successful?  You must reach as many people as possible.  Ok, how do you do that?  You make people enlist their friends into the game.  Ok, how do you do that?

To make people enlist their friends in the game, you must optimize revenue by way of cross-promotion.  Right -- that means optimizing towards users either spamming their friends or paying instead.  This takes game design and turns it on its ear.  In my book, there is no such thing as game design once the game mechanics themselves are revenue-driven.

Few people I know want to be a "game maker" in that environment, myself included.  It's no wonder that so many people I've talked to who have worked on those titles didn't like it.  It's probably because they're not actually designing fun games there; they're making design decisions based on analytics.  Those two concepts are nearly orthogonal.  Starcraft wouldn't exist if it had been designed to be freemium, and Farmville wouldn't be the money machine it is if it weren't run that way.

Anyway, that's all fine and dandy; maybe some people like making obligation games armed with Tableau and Vertica, and good for them.  But this leads into the next stage:  creating content for these platforms must be commoditized in order to survive long term.  Not every customer of one game will want to play the next title, so you must offer more and more choices as time goes on, as customers each gravitate to the unique experience they have in mind.

Do the math.  Say there are 20 friends playing Farmville.  One pays for virtual junk; the others do not -- this is the rate (<5%) that Zynga has disclosed to the SEC.  Now let's assume there's a 50% churn of customers per month.  That is, a 50% chance that a user who pays now will not be playing the title a month from now.  That's the rate I've found reported on the interwebs for social games.

To stay revenue neutral, you will need to migrate your entire customer base to a new game every 2 months.
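
(If the arithmetic isn't obvious:  with churn probability p per month, a paying player's expected lifetime is geometric,

E[\text{lifetime}] = \sum_{n=1}^{\infty} n \, p \, (1-p)^{n-1} = \frac{1}{p} = \frac{1}{0.5} = 2 \text{ months},

so at 50% monthly churn the average payer is gone in two months, and the whole paying base turns over on roughly that schedule.)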

And the chances are low that you'll interest the same customers in the next game.  Maybe one Farmville player migrated to Cityville, and one migrated to Castleville.  Over time, you'll need more and more titles to keep the interest and bring in more people to the spam loop.  This is the only way they can achieve revenue optimization across a large audience of unpaid customers in freemium games.

As a result, making these games must become a commodity to the extreme.  They will become cheaper and of less quality because many more of them need to be made to satisfy growth in light of the facts that there is extreme churn and 95% of Zynga's audience pays nothing at all.

If you don't believe me, you can see this in action right now.  Sims Social dominated 2011 for the most part, but EA overall is losing customers while Zynga stays somewhat steady.  EA had nowhere to put customers and their friends once they lost interest in Sims Social.

The business plan of "I'm going to make a Facebook game" just won't work for very long.  I worked for a short time at a company that dropped several million taking their top-line game franchise and converting it into a Facebook game.  I realized the tough position of Facebook enterprises while I was at that company... even if that game had been successful, you can see what happened with EA.  Overall, your revenues decline unless you have somewhere to put customers immediately when they lose interest.

Zynga has been able to escape this fate for years because they had the luck of growing as Facebook grew.  But now that Facebook itself is leveling out (and shrinking here in the US), Zynga needs to expand production significantly:  spending less on more titles and hoping that the customer churn feeds back into its own catalog rather than into someone else's games or another distraction.

In my next part, I'll take on "Why I didn't go work on mobile games (well, actually I did, but then quit in one month)"