Missing the point

June 12th, 2013 at 12:01

Once upon a time, back in 7th grade, our class had to split into pairs and do a presentation for French class. The assignment was supposed to be straightforward. And for most people, it was.

We were presented with a typical murder-mystery short story. The story was close to being wrapped up. All that remained was a scene where the prosecution was questioning the accused in court. For the assignment, we would be assuming the roles of the prosecution and accused, and acting out the last scene - in French, of course. The class played the role of the jury, and if the accused had an airtight story, he might be able to get away with it.

Obviously the goal was to practice our French. Obvious to everyone but me, apparently. I knew the backstory. I knew that all of the evidence was circumstantial at best. I knew that the only scenarios that could lead to a conviction would be a confession or getting caught in a lie during testimony.

So, I devised a foolproof plan: refuse to answer any questions on the stand. No evidence, no confession, no conviction. That’s how courts work, right?

I put my plan into action. The prosecution asked me about my whereabouts on the night of the murder. I refused to answer. The judge (the teacher) then informed me that I had to answer. But since my whole strategy was to stay silent, I hadn’t prepared anything. I had nothing. The trial ended abruptly.

The jury voted me guilty, and I failed the assignment. And I deserved it, but I didn’t think so at the time. I was convinced that the point of the assignment was to play the crafty murderer as convincingly as possible, so that’s the part I poured my effort into. But it was French class, and the point was to get better at French.

I was never very good at French. I wonder why?


The game engine is not what makes modding hard

October 31st, 2012 at 20:34

The delay on this reaction is huge, but someone commented about it on reddit, and it set me off.

In 2011, some fancypants executive from DICE said that the Frostbite engine that they’re using for Battlefield Bad Company (and now Battlefield 3) is too complicated to mod:

Well, as of now, we’re not going to make any modding tools, no. If you look at the Frostbite Engine and how complex it is, it’s going to be very difficult to mod the game. Because of the nature of the setup of levels, the destruction and all those things, it’s quite tricky. So we think it’s going to be too big of a challenge for people to make a mod.

(Source)

While I’m certain that what he’s saying has some truth to it, I can pretty much guarantee that “trickiness” is not a significant impediment for dedicated modders. Engine complexity has definitely cut down the number of full conversion mods and stretched out their release timelines, but it would be nice to at least let people try.

Not every mod has to be a full conversion. The BF3 subreddit is full of UI improvements that players would like to make, and it would be nice to see some of those actually implemented, even if only as proofs of concept. And none of those changes would mean having to tinker with the destruction system.

I have no doubt that Frostbite is complicated as hell, but then so are CryEngine, id Tech, and Unreal Engine 3, and yet there are mod tools (or full source code!) available for those.

It’s not complexity that’s killing mod projects outright. It’s not like the devs of these huge mods sit down with their great ideas and then come back a few days later and say “actually guys I don’t understand this engine so we’re not going to do the mod anymore.” The way complexity kills a mod is that it lengthens the amount of time that it takes to implement all the grandiose ideas that modders have. It’s time that kills the modding projects, because members come and go, personal lives intervene, it’s much easier for politics to infect a team of volunteers, etc. So I will concede that it is a factor that lengthens projects, and lengthier projects are obviously more vulnerable to collapse.


Back in 2006 I was part of an ambitious project to turn Doom 3 into a total conversion: a post-apocalyptic turn-based RPG with destructible, procedurally generated environments and chance-based combat. Think Fallout meets XCOM. Of course, I can’t take credit for any of the really cool stuff in the tech demo, since I was still in high school at this point; I made the RPG-like grid inventory. So yeah, Mr. Soderlund, I’m sure your engine is really tough to use, but you’re talking to the community that added environmental destruction to Doom 3 from scratch, in 2006. And it’s arguably better destruction than Frostbite’s. So I’m pretty sure we can figure out yours.

We managed to get all this done, plus quite a bit of graphic design and sound design, and a bit of original soundtrack composition. Things were going pretty well. In fact, here’s a video of the tech demo: http://www.youtube.com/watch?v=ewN2qBXQKXY

Things didn’t fall apart because Doom 3 is an incredibly complicated engine. Things fell apart because the lead designer and programmer both got jobs, and because some unfortunate personal problems intervened: the lead designer broke up with his girlfriend, who was the lead artist.

Shit happens, obviously, but complexity is really not the thing that’s holding mods back.


StreamRoller Development Resumes

September 5th, 2012 at 19:26

Seeing as how StreamRoller is a hobby project, it’s an unfortunate reality that months can pass before I have the time or motivation to pick it up again. But that time has come, and after a fairly short hiatus of three months, StreamRoller is due for another burst of activity.

In my last post, I said I would talk about some server-side improvements. Well, here they are:

  1. An “available as” field on every track.

    StreamRoller supports transcoding. Right now it’s minimal, but I humbly maintain that the framework is superbly set up to support many input and output media types. There are currently two transcoders implemented (because they’re the only two I need):

    1. FLAC to Vorbis
    2. FLAC to MP3

    The way the transcoders are currently prioritized, Vorbis gets picked every time. It’s the superior format in terms of quality for a FLAC transcode, but inferior in terms of device compatibility.

    Chrome, Firefox, and Android support Vorbis. iOS, Safari, and IE do not - but they do support MP3. It’s tough for the server to reliably decide which encoding to use, because it has very little information to go on; the best it can do is guess. And if it guesses wrong, you’re not going to hear anything.

    So instead of making the decision on the server, I’ll make it on the client. The client has better knowledge of what its capabilities are. And in the worst case, the user knows when they aren’t hearing anything, so we can let them manually pick a format.

    In order to do that, the client has to be able to know what formats are available for every track. So I’ve injected that information into the JSON that I give the clients! Simple as that.

  2. The ability to request a certain format.

    Naturally, for the track information to actually be useful, I have to allow the client to override any encoding priorities the server may have, so that it receives a format it can play. This is done simply enough by letting the client tack on ?supported_mimetypes[]=something to the end of its media requests. The client can attach a static list of the mimetypes it knows how to play to every request, leaving the server to decide which of those formats to serve. Best of both worlds! (See the sketch below.)
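
Here’s a rough sketch of the whole negotiation in Ruby. The names (TRANSCODERS, track_json, pick_mimetype) are hypothetical, invented for illustration rather than lifted from the StreamRoller source:

# Server-side formats, in the server's order of preference.
TRANSCODERS = {
  "audio/ogg"  => :flac_to_vorbis,   # picked by default
  "audio/mpeg" => :flac_to_mp3,
}

# Change 1: every track's JSON advertises which formats it's available as.
def track_json(track)
  { "title" => track[:title], "available_as" => TRANSCODERS.keys }
end

# Change 2: the client narrows the choice via ?supported_mimetypes[]=...
# With no hint from the client, fall back to the server's own priority.
def pick_mimetype(supported_mimetypes = nil)
  supported = supported_mimetypes || TRANSCODERS.keys
  TRANSCODERS.keys.find { |mime| supported.include?(mime) }
end

pick_mimetype(["audio/mpeg"])                  # => "audio/mpeg" (e.g. Safari)
pick_mimetype(["audio/ogg", "audio/mpeg"])     # => "audio/ogg"  (e.g. Chrome)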

Those two changes will probably be the last significant changes to the StreamRoller backend for quite a while. Development now moves on to redesigning and rewriting the frontend. Next post: I will talk about the progress that has already been made on adding mobile support to StreamRoller.


StreamRoller Continued (Part 1): A beginner’s look at JRuby

May 23rd, 2012 at 00:15

When I first learned that StreamRoller was built on top of JRuby, I considered it a regrettable concession. Putting the project on top of JRuby afforded us the portability we wanted: the ability to use the same bundle and libraries on any platform that supports the JVM. The downside, I assumed, was that JRuby was an offshoot of mainline Ruby, and probably lacked a lot of support and features. Since then, I have corrected that ignorant opinion.

I’ve identified two weaknesses of JRuby compared to its native-code ancestor, MRI:

  1. There aren’t as many gems available, thanks to a fair number of gems requiring native code libraries. JRuby can sometimes link against them, but it’s not perfect, and that option is unavailable to us in the case of StreamRoller anyway, since we don’t want to have to ship native libraries.
  2. You have to wait for the JVM startup time, in addition to the normal app startup time.

There are supposedly tons of advantages that JRuby has over MRI, such as better memory management and better overall performance. But performance concerns are pretty minor in a tiny project like this; what impresses me most is the smoothness of the interface between JRuby and the underlying Java infrastructure. Old news to many, but fun for a beginner to pick up.

JRuby may lack gems, but it can seamlessly access any Java library, and we’ve been using that capability in StreamRoller since the very beginning with Jaudiotagger. It’s something of an exception to the external-libraries point I made earlier: taglib-ruby was only just getting started when StreamRoller was taking off, but Jaudiotagger was already featureful and stable.

Last week, I wrapped a quick interface around the Java standard library, and really cleaned up our image resizing code:

https://github.com/l3ib/StreamRoller/commit/7becd47890f1a2b5a6c1de4db5cb9424826e1501

The ImageMagick API is nothing to write home about, but I was familiar with it, so I used imagemagick4j to implement my first pass at image resizing. It was rife with problems, so I was really happy to step a little outside my comfort zone and replace it with a simple wrapper around Java’s image manipulation classes. I barely know any Java, so I was rather pleased that a total beginner could figure out how to make the JRuby bindings work within a couple of minutes.
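
For flavor, here’s roughly what that kind of wrapper looks like (a from-memory sketch, not the code from the commit above): JRuby imports the Java classes directly, and even converts their camelCase methods to snake_case for you.

require 'java'

java_import javax.imageio.ImageIO
java_import java.awt.image.BufferedImage

def resize_image(src, dest, width, height)
  original = ImageIO.read(java.io.File.new(src))
  resized  = BufferedImage.new(width, height, BufferedImage::TYPE_INT_RGB)

  g = resized.create_graphics                      # Java's createGraphics
  g.draw_image(original, 0, 0, width, height, nil) # scale onto the new canvas
  g.dispose

  ImageIO.write(resized, "png", java.io.File.new(dest))
end

resize_image("cover.jpg", "cover_small.png", 200, 200)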

Post beta, we will likely be utilizing the JRuby-Java bridge again. I’d like to make it possible to administer the server from a GUI, and since we’ve already got Java available, it might be feasible to hook into Java’s GUI frameworks for that. That’s a long way off, though…

In the next part, I’ll go over the last server-side features I added before declaring StreamRoller server feature-complete for the beta.


StreamRoller Revived

May 22nd, 2012 at 01:27

A year ago, Chrome’s HTML5 <audio> implementation insisted on making Range requests, which are hard to satisfy when the audio is being transcoded on the fly. IE’s basically didn’t feel like working at all. Firefox supported only Ogg Vorbis. Flash was also ill-suited to playing mp3s over normal HTTP, since it wanted to know the length of the file and would fail if the Content-Length header wasn’t set.

These problems made it really challenging to effectively transcode on the fly straight to the browser. Workarounds were required, sacrifices made. The depressing state of the technology was bad enough that I considered the project abandoned.

Today, I’m happy to report that Chrome and IE9 handle streaming mp3s much better - so much better that StreamRoller can now officially support them with no server-side or client-side hacks of any kind. Flash isn’t needed anymore! But Firefox still only supports Vorbis, so StreamRoller won’t support Firefox. Sorry guys, but we need mp3 for this to work!

After a furious long weekend of hacking, I’m officially reviving the StreamRoller project!

I’ll be posting frequently to summarize the cleanup that I’ve just finished, the work that still needs to be done, and the exciting new features you can expect in the future!


Technology Evaluations

December 4th, 2011 at 17:09

Intro

I built a stupid app this weekend. In doing so, I tried out a bunch of technologies that have been around for a while, but are new to me.

They are:

  • Coffeescript
  • jQuery deferred
  • The twitter API
  • JSONP

Here’s my review of each.

Coffeescript

I don’t really like Coffeescript. I will never be a fan of languages that are whitespace sensitive, but I still use them; I’m willing to overlook the straitjacket of whitespace sensitivity as long as the language is good enough. Now that I think about it, both haml and sass (also used this weekend) are whitespace sensitive, and also compile to a base language before being transmitted. Their benefits are worth it, but they’re not on trial today.

Context

I like the initial idea of the @ operator in Coffeescript. It’s a shorthand for “this” - and you’d better remember that, because it can’t be used like you would use it in ruby. I was stuck for longer than I’d like to admit wondering why my assignment to an instance variable wasn’t working in one of my callbacks. Culprit: @ did not reference my class, but the context I was currently in. From a javascript perspective, that behavior is obvious. From the object-oriented, ruby-like context that Coffeescript was pretending I was working in, it was baffling. I had to fall back to the that = this idiom to get my callbacks to close over the object I really wanted to operate on.

Documentation

The documentation is pretty abysmal too. You’re lucky if you get more than a couple of examples. The tutorial doesn’t even show how to make multi-line functions! Nor does it demonstrate how to use an in-line function for a function parameter that is not the final parameter. The whitespace-sensitivity can make it a bit confusing.

Only because I already knew javascript did I realize that calling a sibling method requires @ or this. Somehow, in their discussion of classes in both the tutorial and the book, they managed to avoid ever calling one method from another. I’m not asking for python-style formal grammar documentation, but there’s a lot to be desired here.

Globals

Coffeescript also makes it really hard to write library files. It has gone from javascript’s implicit globals to the complete opposite: not allowing any globals at all. This is an improvement in many ways, but it means you have to fall back to raw javascript or settle for using the window global if you want to expose your code to anything outside the current file.

I had my code set up with separate responsibilities: a library that interacts with the twitter API, and a UI frontend. These files can’t talk to each other at all without globals. It’s a shame that the architecture of Javascript has forced that behaviour, but Coffeescript goes and makes it worse. I had to use the following code to create a global and then drop my class into it:

`TweetMax = {}`
class TweetMax.Twitterer

If it’s not clear, the backticks drop me down to raw javascript. Thankfully I can at least do this, but this is like having to drop down from C to assembly if C didn’t have the extern keyword.

jQuery Deferred

The majority of my interaction with jQuery Deferred objects was just using the bindings off of ajax actions, but I did write a few of my own. It’s a great API, and is probably a pattern that I’ll wonder how I ever lived without.

The one deficiency I noted is that you can’t reset the status of a deferred object. Once a deferred is resolved, it can’t ever be re-resolved. This meant that I had to pull in a proper eventing framework instead of just re-resolving the deferred that I was supplying to the clients of the library. Not a big deal, and for many use cases that’s probably a deliberate feature. But for me, it was a deficiency.

Twitter API

The twitter API itself is nice. It’s available in a variety of formats, and is quite sensible about urls, parameterization, and paging.

It all falls down in the service. Throughout the weekend I was constantly plagued with 502s (Bad Gateway). At first, I thought I might be getting rate-limited, since my app does require quite a few sequential requests. After a bit of googling and head-scratching, I discovered that the API service is simply flaky and goes down all the time. A 502 isn’t really an exceptional failure with the Twitter API; it just means “try again in a few seconds”. When working with the twitter API, you absolutely must write code that anticipates that the servers will constantly fail to fulfill your requests.
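
My client code is javascript, but the defensive pattern is language-agnostic; here’s the shape of it as a quick Ruby sketch (a hypothetical helper, not code from this project):

require 'net/http'
require 'uri'

# Retry transient 5xx responses a few times with a short pause, since a
# 502 from this API usually just means "try again in a few seconds".
def get_with_retries(url, attempts = 5, pause = 2)
  attempts.times do
    response = Net::HTTP.get_response(URI(url))
    return response unless ('500'..'599').cover?(response.code)
    sleep pause
  end
  raise "service still failing after #{attempts} attempts"
end

Which is okay in theory, but leads us into the next technology…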

JSONP

JSONP is a way to get around the cross-domain restriction on XHR requests. This means my clients can make requests directly to the Twitter API without having to filter through my server first. The specific implementation of the format isn’t entirely important, but how it interacts with the browser is. The browser loads the response from a JSONP request the same way it loads any other resource. Basically, to do a JSONP request, you add a script tag to the DOM, the browser fulfills the request, and a function gets called when it all works. What happens when it doesn’t work? Nothing. The only way to figure out that a request failed is to set a timeout in Javascript that fires if your function doesn’t get called within an arbitrary time limit. Ouch. So that twitter API that is constantly failing? It doesn’t play very nicely with JSONP. Even with the timeout, you don’t get the luxury of knowing what exactly went wrong, so I have to jump through more hoops to tell whether I’ve been rate-limited or Twitter is just screwing up again.

Conclusion

  • Coffeescript: I’ll probably avoid it in the future. You know what you’re signing up for with javascript. Writing “function(” everywhere is annoying for some people, but not for me.
  • jQuery Deferred: A+++++ would use again
  • Twitter API: Hopefully I won’t have any more bad ideas that involve the Twitter API. Of course, my gripes aren’t with the fundamentals of the API, just with the reliability of delivery. So if they fix that up, I’d be pleased.
  • JSONP: Cool technology, but the inability to detect failures really snuck up on me and derailed this project. Now that I know about that problem, I’ll have to carefully consider what I use it for in the future.

vim, ijkl

January 24th, 2011 at 19:54

My coworkers finally managed to convince me to switch to vim. What a chore. I basically spent an entire weekend customizing someone else’s .vim directory to my tastes, which was mercifully easier than tracking down the plugins and configuring them myself, but it was still a frustrating exercise in editing my vimrc from within a vim that wasn’t yet configured the way I wanted.

It’s a real shame that the vim authors grew up in an era before video games. In my humble opinion, ijkl (used in the same way as wasd) is a vastly superior directional scheme to hjkl. It’s directionally intuitive, unlike the linear hjkl, which relies on memorization. It also keeps your index finger on j, where it belongs on the home row. I use h for insert. It’s not as crazy as it sounds. How often do you repeat ‘go left’? Sometimes. How often do you repeat ‘insert’? Never. Because now you’re in insert mode. Stretching to h for repeated go-lefts became uncomfortable, and sometimes I would unconsciously move my whole hand to start on h, which would offset my right-handed typing, and I’d have to find my place on the home row again after moving around.

So far I haven’t run into much trouble. At least vim has the configurability to change this sort of thing, even if it does take an entire weekend to get everything sorted out. Thanks go to Deewiant for this comment, which helped me get my bindings consistent.

Here’s the rebindings, for anyone who wants to join my little cult:

noremap i k
noremap I K
noremap <C-w>i <C-w>k
noremap <C-w>I <C-w>K
noremap <C-w><C-i> <C-w><C-k>

noremap j h
noremap J H
noremap <C-w>j <C-w>h
noremap <C-w>J <C-w>H
noremap <C-w><C-j> <C-w><C-h>

noremap k j
noremap K J
noremap <C-w>k <C-w>j
noremap <C-w>K <C-w>J
noremap <C-w><C-k> <C-w><C-j>

noremap h i


Publishers! :(

January 14th, 2011 at 19:08

Availability of “Starship Troopers” by Heinlein:
1. Kindle Store: 0
2. Google search for “starship troopers mobi”: First Link

You lost a sale!


GOMTV Video Ripper

January 8th, 2011 at 12:50

Okay, so it’s done. As done as it’s going to be, anyway. I really can’t be bothered to spend time beautifying a dirty hack of a web scraper. You’ll have to know about ruby and have a gomtv.net login with a season ticket in order for it to work.

Here it is:

https://github.com/Lugghawk/SCScraper

Enjoy.


New Year, New Stuff

January 4th, 2011 at 19:06

I finally got around to some stuff I’ve been meaning to do for a while this weekend/holiday.

  1. With the help of the rest of the l3ibs, l3ib.org and all its corresponding services have been migrated from a rusty old dedicated Celeron to a shiny new Linode. I like the idea of virtualized machines: in theory they reduce the impact of a hardware failure, since the virtual machine can be migrated to working hardware with little to no noticeable effect. We’ll see how that turns out.

  2. In addition to the migration of services, I put together a short list of security priorities for the new server. Embarrassingly, the old one wasn’t even running a firewall. The new server feels much safer with the addition of a strict firewall and some rootkit mitigation, but there’s still more to be done. In-depth auditing of privilege escalation is possible via sudo, which has an option to launch subprocesses in a virtual terminal and log everything in a way that can be played back like a video. I’ll talk about this and more if I ever get around to setting it up (there’s a sketch of the relevant configuration at the end of this post).

  3. As part of the server migration, and as part of something I’ve wanted to do for a while now, I’ve migrated from wordpress to tumblr. My wordpress was constantly being hit with comment spam, and I was reluctant to spend time updating a platform I barely ever posted with. Tumblr will let me take a more hands-off approach.

    Tumblr’s templating engine is easier to figure out than wordpress’s, and I managed to get a working theme ported over in an hour or two. It’s feature-barren, but I never intend to use a lot of the features that tumblr provides anyway.

    I like tumblr’s custom domain solution. It was easy to set up and works nicely.

  4. With a new season of GOMTV starting up again, I took it upon myself to see if I could get my video scraper working. The goal is to be able to grab all of the videos without much human intervention. It was possible to get all of the videos before by just taking them out of Flash’s temp directory, but it was a cumbersome manual process, and prone to spoilers. If I can get this to work, it’ll provide a technique for archival, as well as a way to much more easily stream the content to my PS3.

    Turns out GOM has modified their site between seasons again, which meant that about half of my previous reverse engineering work had to be redone. Fortunately, they didn’t change the fundamental way of authenticating that you have a season ticket and downloading the video; the HTML scraping just had to be done in a different way. I’m guessing this problem will repeat itself as GOM continues to improve their site.

    I sort of have a script working. It can fetch some videos, but I’m running into frustrating-to-debug corner cases.

    I will post about progress as it happens.
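
For the curious, the sudo feature mentioned in point 2 boils down to a couple of sudoers directives. This is a sketch from memory rather than a tested config, so treat it as a starting point:

# /etc/sudoers (always edit with visudo)
Defaults log_output                    # record the I/O of every sudo session
Defaults iolog_dir=/var/log/sudo-io    # where the session recordings are kept

Recorded sessions can then be played back like a video with sudoreplay: sudoreplay -l lists the sessions, and sudoreplay 00/00/01 replays one by its ID.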
