Monday, June 29, 2015

Building a browser application using React

tl;dr - use Node.js-Browser-App on github to quickly get started building a React-based web application with Node.js

Lately I've been doing a lot of work constructing javascript-based applications that are meant to run in a browser. In some cases an application was meant to be used in a web browser and in other cases as a Chrome App (formerly called a Packaged App). Currently the React toolkit developed by Facebook and Instagram has achieved a certain dominance in web app development, and it functions well in both environments.

One of the amazing strengths of React is the size of the ecosystem that has grown up around it. There are a large number of addons, mixins, and other projects that augment its capabilities. The sheer amount of functionality (code) available is staggering. But before I get too far into React I should really talk about Node.js. Node.js is simply a runtime environment for javascript code. Like React, Node.js has acquired a large ecosystem of code that can be executed in the runtime, perhaps most famously the Node Package Manager, always referred to as npm.

npm helps solve one of the problems javascript projects have suffered from in the past: how to organize the code so that it can be understood by humans (file naming, directory structure, and small encapsulated chunks of functionality) and packaged for efficient delivery to a javascript runtime. One solution is to use npm packages with source code organized following CommonJS conventions. Both Node.js and npm understand CommonJS and can work with it directly, but browsers cannot. However, as in all things Node.js + npm, there's a package for that: browserify. Browserify is a toolkit that reads CommonJS files and converts them into javascript that a web browser can execute. Additionally, Browserify supports "transforms" that can modify CommonJS source files in some way before they are parsed. Browserify can also generate "source maps" so that later, in the browser's debugger, the original source files can be debugged rather than the browserified output. And as you'd expect by now, Browserify itself has been packaged so that it's available in the Node.js environment through npm.
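As a tiny illustration, here's what a CommonJS module and the entry point that consumes it might look like (the file names are hypothetical):

// greeting.js - a small, encapsulated chunk of functionality
module.exports = function greet(name) {
  return 'Hello, ' + name + '!';
};

// main.js - the entry point; require() is the CommonJS import mechanism
var greet = require('./greeting');
document.body.textContent = greet('browser');

Bundling for the browser is then a single command, browserify main.js --debug -o bundle.js, where the --debug flag emits the source map.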

Now back to React. React files are regular javascript except they support the use of JSX syntax. JSX is a little like HTML embedded in your javascript. Unfortunately neither javascript runtimes nor browserify understand JSX, but there's a package for that. Through npm we have access to reactify, a browserify transform that will convert the JSX syntax into plain old javascript. Once the JSX is converted browserify can bundle up the code for a web browser.
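For example, a component written with JSX might look like this (a sketch using the React.createClass API that was current at the time; the file and prop names are made up):

// Hello.js - JSX lets you write markup-like syntax inside javascript
var React = require('react');

module.exports = React.createClass({
  render: function () {
    // reactify rewrites this JSX into plain React.createElement() calls
    return <h1>Hello, {this.props.name}!</h1>;
  }
});

From the command line the transform is applied with browserify -t reactify main.js -o bundle.js.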

Of course there are many other steps in preparing a web app for delivery to a server. Node.js + npm can do this too using a variety of task/build utilities; I chose to use gulp. Briefly, here are some things you can do with gulp: browserify (including reactify), conditionally affect stream piping (using gulp-if), copy and modify a JSON file (using gulp-json-editor), generate jsdoc documentation, minify output, copy files, and much more.
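As a sketch of what the central gulp task might look like (the paths are assumptions, and vinyl-source-stream is one common way to adapt browserify's output stream for gulp):

// gulpfile.js
var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream'); // wraps the bundle stream in a vinyl file

gulp.task('bundle', function () {
  return browserify('./src/app.js', { debug: true }) // debug: true emits source maps
    .transform('reactify')                           // convert JSX to plain javascript
    .bundle()
    .pipe(source('bundle.js'))                       // name the output file
    .pipe(gulp.dest('./build'));
});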

With Node.js + npm + gulp + browserify + reactify we can create javascript web applications where the code is organized in a manageable fashion by using CommonJS conventions and we get a high performance UI with React. If you think that those are a lot of pieces to put together from scratch you're right, so don't. You can get off to a faster start using the Node.js-Browser-App repository on github. This repository has all the components described above (plus a Flux dispatcher) so you can just copy the repo and start building a web app or Chrome App.

Incidentally, once you've got your project going you'll find a lot of cloud services can work with it. For example Microsoft's(!) Azure can be set up with webhooks from github so that every time your source code changes it will fetch the updates from the repository, build with Node.js, and deploy the output.

Sunday, December 9, 2012

@font-face and Blogger: Easy Way

You want to use @font-face with Google's Blogger? I'll cover two different ways to do it: the easy way, with limited font choices, and (coming soon) the not-as-easy way, which lets you use any font you have rights to.

A specimen of the Lobster font.

The easier solution is to find a font you like at Google Web Fonts. It's easier because Google generates the different types of font files needed for browser compatibility and, more importantly, hosts the font files for you. All you need to do is find a font you want to use and get the link to its stylesheet, which will make the font available to the rest of your CSS. The link looks like this:

<link href='http://fonts.googleapis.com/css?family=Lobster' rel='stylesheet' type='text/css'>

Now that you have a link to the CSS file it needs to be included in the blog's template HTML. In the Blogger dashboard choose "Template" from the menu on the left and then click the "Edit HTML" button. A warning message will be displayed; click "Proceed." Now is probably a good time to copy all of the HTML into a text editor and save it as a backup so it can be restored later if something goes horribly wrong. Now find the <head> element and add the link element right after it so that the HTML looks like this:

<head>
   <link href='http://fonts.googleapis.com/css?family=Lobster' rel='stylesheet' type='text/css'>

Now the font files will be downloaded to the viewer's browser, but how do you use the font in CSS? Google Web Fonts also shows the contents of the linked CSS file so you can see the properties needed to set it as a style. Styles can be added using Blogger's "Add CSS" feature under "Template" - "Customize" - "Advanced" - "Add CSS". For this example we can create a CSS class like

p.lobster {
  font-family: 'Lobster', cursive;
  font-style: normal;
  font-weight: 400;
  font-size: 18pt;
  color: #BF1A2B;
}

and you can have text that looks like this (though I don't recommend it) by setting a class of "lobster" on the paragraph:

The lobsters are loose! Run for your lives! Cover yourselves in butter!
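In the post's markup that paragraph is simply:

<p class="lobster">The lobsters are loose! Run for your lives! Cover yourselves in butter!</p>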

Part two of this soon-to-be two-part series will cover the "Not so easy way of using @font-face with Blogger," which involves serving font files using Google's App Engine.

Saturday, November 17, 2012

Mobile web app fail

As an experiment I wanted to see if a web app could be created that would send images from a wi-fi enabled mobile device to a computer. The appeal was that the computer could serve the web app directly to a user's mobile device, the user could then upload their images to the computer, and both operations could be done without downloading a native app from an application vendor or using a data plan.

The proof-of-concept requirements were fairly broad and simple: serve the necessary files that make up the web app from the computer, allow the user to choose image files, and upload the images to the computer. Serving the files was simple: I created a basic stand-alone server process that could respond to requests for files and also handle the image files that were uploaded to it via XHRs from the device's browser. The user requested the start page by scanning a QR code that launched the browser and navigated to the app's URL. The problems really started with the web browsers available on mobile devices.

The largest problem was the iOS web browser. Prior to the recently released iOS 6, the iOS browser didn't allow the user to choose files for upload: the basic HTML "input" element with type "file" was not supported. It seems hard to believe this was an unintentional oversight on Apple's part. The practical effect was to force developers to create native iOS apps rather than allowing them to create a single web app for any mobile device. A large number of devices will remain on iOS 5 and earlier for years to come, so as developers we will be living with a crippled iOS browser for a long time. IE 6 anyone?

The next problem involved displaying thumbnails of multiple images. On my Nexus S phone both the native and Chrome browsers crashed after opening multiple images. Looking at the logs I found that the browsers were running out of memory. I was able to improve the situation considerably, though not entirely fix it, by setting the img element's width and height attributes to the size at which the image was actually displayed rather than relying on CSS.

Once that problem was minimized I was able to select images, insert the thumbnails into the DOM, and then upload the files to the computer. Some positives from this experiment: all of the Android browsers had partial support for the File API (notable exception below), data URIs could be used to access the image information in the files, and @font-face was supported so I could use Font Awesome for the images on buttons and menus. Negatives: while the File API's File object is supported, Android doesn't have multi-file select functionality for browsers. Each image file had to be chosen one at a time from the Gallery application, which became tedious by the second file selection.
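The thumbnail flow looked roughly like this (a sketch; the element id is an assumption, and only one file is read at a time since multi-select wasn't available):

// markup: <input type="file" accept="image/*" id="picker">
var picker = document.getElementById('picker');
picker.addEventListener('change', function () {
  var file = picker.files[0];      // a File object from the File API
  var reader = new FileReader();
  reader.onload = function (e) {
    var img = document.createElement('img');
    img.src = e.target.result;     // a data URI holding the image bytes
    // real pixel dimensions, not CSS, to keep memory use down
    img.width = 120;
    img.height = 90;
    document.body.appendChild(img);
  };
  reader.readAsDataURL(file);
});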

I'd like to mention that of the three browsers I used during development, only Mozilla's Firefox browser for Android was able to reliably display and upload files. Unfortunately, the proof-of-concept web app had to run on at least the iOS and Android native browsers; other browsers like Firefox or Dolphin just don't have a large enough market share to make them an acceptable solution.

Having said that, my experience with the shortcomings of many mobile web browsers has reinforced my belief that Mozilla's mobile browser and Firefox mobile operating system are as important now as Firefox was ten years ago in helping to push innovation forward, especially on mobile devices, and to keep the web open.

Lastly, a tool that was indispensable and made development easier and faster was desktop Firefox's Responsive Design View mode. In this mode (available under the "Web Developer" menu) the browser's viewport is resized to emulate various common monitor and mobile device screen dimensions, or you can set custom dimensions as needed. Using this I was able to set the browser to the same size as my phone and do most of the development work on my desktop, only using the phone to periodically test functionality. If you need to design a web page that is "responsive" to different size screens, or to screens that change size like switching from portrait to landscape, then this tool is a must-have.

Sunday, October 7, 2012

Set an Android ringtone

Android has a lot of quirks and annoyances. A common one: you set a custom ringtone, then later no ringtone plays when there is an incoming call (the phone is silent). This happens when the audio file is not located in the correct place or the wrong application is used when choosing an audio file as a ringtone. Later, if the phone is plugged into a computer and the USB file system is enabled, the custom ringtone becomes unavailable and the OS "forgets" it. Here is how to set a custom ringtone from an mp3, ogg, etc. audio file so that the phone won't forget:

  1. Connect your phone to a PC and copy the file to the folder \sdcard\media\audio\ringtones on your phone. You must create any of these folders that don't exist.
  2. Disconnect your phone from the PC.
  3. Select a contact and from the menu choose "Set ringtone". If given the option to choose an application to select the ringtone choose "Media Storage" and then select the "Always" button. Never use any application other than "Media Storage" to choose the audio file.
  4. From the list of available media choose your ringtone (the name that appears depends on the ID3 tags in the audio file you added). But wait! What if you don't see the file you just added? The media list is populated by the OS scanning periodically for audio files, and it can take some time to discover the new file. So you have two options: wait a while and retry until the file shows up in the list, or clear the media cache and force it to rebuild, which still means waiting a while. To clear the cache do this: menu - Manage Apps - All - Media Storage - Clear data. Now reboot the phone and wait for the Media Storage data to be rebuilt. This can take tens of minutes. Eventually your audio file will show up in the list.

You can also set the phone's default ringtone by selecting System settings - Sound - Phone ringtone in place of step 3. That's it, now your ringtones should always be available even after treating the phone as USB storage.

Sunday, July 22, 2012

When Canon G1X video and Sony Vegas collide

Do you end up with periodic black frames followed by bright ones when editing video from Canon's G1X camera using Sony Vegas Movie Studio 10? If so try re-encoding the camera generated MOV files using Huffyuv (which is lossless) before editing them in Vegas.

Canon's G1X camera is capable of capturing H.264 encoded video and storing it in an MOV file. The video captured is 1920x1080 at 23.976 frames per second (you can read up on this not-quite-24fps rate if you're interested).

I had no problem opening the video files in Sony Vegas and working with them in any of the tools. For my initial test I simply stacked up a few short clips directly from the camera and started the render to H.264 process. In playback the first minute or so of the rendered video was fine but eventually a black frame was inserted into the middle of a clip, not at a transition, and bright frames followed it for a second or so before normal brightness slowly returned. Over the course of my five minute video this happened several times.

I decided to avoid using the H.264 encoded camera input by using ffmpeg to convert the camera video to Huffyuv encoded video, since this would give me lossless video to experiment with. (Getting the Huffyuv codec installed on my Windows 7 system so that it would be available to Vegas was a little detour of its own.) Once that was done I opened the Huffyuv video in Vegas in the same order as before with the camera video and rendered to H.264. This time the resulting video played back without any defects.
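The conversion command was along these lines (the filenames are placeholders; -vcodec huffyuv selects the lossless video codec and -acodec pcm_s16le keeps the audio as uncompressed PCM in an AVI container):

ffmpeg -i MVI_0001.MOV -vcodec huffyuv -acodec pcm_s16le MVI_0001.avi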

It appears to me that Sony Vegas is not really capable of handling the 23.976 frame rate properly when working with H.264 encoded 1920x1080 input and periodically inserts a black frame during the output render. When the output is then H.264 encoded the black frame affects the compression of following frames, making them brighter.

The summary: Canon G1X video should be converted to a lossless encoding (Huffyuv works fine) before being edited in Sony Vegas Movie Studio 10. Other versions of Vegas may or may not need the same treatment.

Tuesday, February 21, 2012

Synchronizing multiple jQuery ajax calls using the when function

There is a lot of information about jQuery's ajax() function, the jqXHR object, and using jqXHR as a Deferred object with the when() function, but I couldn't find any examples that illustrated all this functionality working together.

The problem I solved using ajax() and when() is not unique or unusual: I had an unknown number of simultaneous ajax() calls to make and wanted code execution to continue in a single callback function when they had all completed. This is exactly the type of situation that Deferred objects and the when() function were created to handle. If you are used to working in multi-threaded environments you can think of when() as a kind of synchronization object. You can pass multiple jqXHR objects, returned by ajax() calls, to when(). You can chain a then() or done() function to the Deferred object that when() returns, and the callback function you pass as an argument to the chained function will be invoked after all of the ajax() calls have completed.
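For a fixed number of calls the pattern is compact (a sketch with two made-up URLs):

$.when($.ajax('/a.json'), $.ajax('/b.json')).done(function (resultA, resultB) {
  // each result is a [data, statusText, jqXHR] array
  console.log(resultA[0], resultB[0]);
});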

Something I wanted to do to solve my problem, but didn't know how to do or even if it was possible, was to pass an unknown number of jqXHR objects to when() and get back the results for each jqXHR. One of the strange and wonderful things about javascript is that functions are objects, and as objects they can, and do, have their own functions! Another peculiarity is that javascript functions will accept any number of arguments, regardless of how the function is originally defined. This matters because it means when() can take any number of Deferred objects as parameters. If you know each of the Deferred objects ahead of time you can simply pass them to when() as parameters (e.g. $.when(deferred1, deferred2)). If you don't know the number of objects you can gather them together in an array and pass it to when() using the apply function. The apply function takes an array of objects and passes them to its owner function as a set of parameters (be warned that the number of parameters a function can accept is limited by the javascript environment, so don't go crazy).

Now I knew how to pass an unknown number of jqXHR objects to when(). But how do I get the results of the ajax() calls? The when() function returned a Deferred object, and I passed its done() function a callback to invoke when all the ajax() calls were, well, done. I couldn't find any examples of what this callback should look like when an array of Deferred objects was passed to when(). My initial thought was that an array of result objects would be passed to the callback, but the single argument I defined only held the result of the first ajax() call. After a long time of experimentation, fruitless searching, and head scratching I suddenly remembered the arguments local variable that is available in every function. I realized that my callback was being passed a separate result object as a parameter for each ajax() call. Sure enough, I found that the length of the callback's arguments matched the number of jqXHR objects passed to when(), and by iterating over it I could get a result object for each call.

So finally, here is a code snippet illustrating how to pass an array of jqXHR objects to when() and get back the results! It's easiest to look at the original Gist, or you can try it out in jsfiddle to see it work.
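In outline it works like this (a sketch reconstructed from the description above; urls is a hypothetical array of request URLs):

// collect a jqXHR for each request
var requests = [];
$.each(urls, function (i, url) {
  requests.push($.ajax({ url: url, dataType: 'json' }));
});

// apply() spreads the array across when()'s parameters
$.when.apply($, requests).done(function () {
  // one argument per request, each a [data, statusText, jqXHR] array
  for (var i = 0; i < arguments.length; i++) {
    var data = arguments[i][0];
    console.log('result ' + i + ':', data);
  }
});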

One final technical note: the jqXHR is not really a Deferred object; it implements the Promise interface, but for the sake of simplicity I called it a Deferred object.

Tuesday, December 13, 2011

Facebook Graph API tweaking: fields

Want to improve the performance of your Facebook Graph API calls? Try trimming your requests down to just the information you need. During recent optimization and testing of some Facebook Graph API client code, my colleague Bob determined just how much time could be saved by tuning the API requests. Typical requests result in the return of a default object; for example this URL

https://graph.facebook.com/6547384565867889

returns this album object:

{
  "id": "6547384565867889",
  "from": {
    "name": "John Doe",
    "id": "9834439837"
  }, 
  "name": "Mobile Uploads", 
  "link": "https://www.facebook.com/album.php?fbid=6547384565867889&id=9834439837&aid=743987489", 
  "cover_photo": "748397943639", 
  "privacy": "custom", 
  "count": 1, 
  "type": "album", 
  "created_time": "2011-11-24T16:04:54+0000", 
  "updated_time": "2011-11-24T16:04:55+0000", 
  "can_upload": false
}

The client application doesn't need most of this information; all it needs are the id, name, photo count, and creation time. By adding the optional fields parameter with a comma separated list of field names to the request:

https://graph.facebook.com/6547384565867889?fields=id,name,count,created_time

an object with only the requested information is returned (apparently Facebook gives us "type" for free):

{
  "id": "6547384565867889", 
  "name": "Mobile Uploads", 
  "count": 1, 
  "created_time": "2011-11-24T16:04:54+0000", 
  "type": "album"
}

Not only does this result in less information being transmitted to the client, more importantly it results in considerably shorter response times from Facebook. It seems that retrieving this information requires significant lookup effort on Facebook's servers, and asking for less information means less rummaging through the datastore for all the bits. Bob found that getting 57 albums containing 2,434 photos from his account using the default request took 90 seconds. After adding the fields parameter with only the fields required it took only 40 seconds, less than half the original time! Of course YMMV based on the network. We also found that eliminating likes and comments had the largest effect in reducing response time. If you are working on an application that gets large amounts of data from Facebook it may be worth the effort to consider what information is being provided and request only what the client needs.
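If you're making the request from javascript, the fields parameter is just another query-string value. A minimal jQuery sketch using the album id from the example above (a real request would typically also need an access_token parameter):

$.getJSON('https://graph.facebook.com/6547384565867889', {
  fields: 'id,name,count,created_time'
}, function (album) {
  console.log(album.name + ' has ' + album.count + ' photo(s)');
});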