QueTwo's Blog

thoughts on telecommunications, programming, education and technology

Tag Archives: Android

Simple Caching Techniques in Adobe AIR

One aspect of the recently released Pointillism mobile app is that users were expected to play the game while in remote areas.  Remote areas often mean that data service is limited or just plain not available at all, and that can wreak havoc for game participants waiting for data to load.  There are two schools of thought on how to approach this problem.

One is to pre-load all the content that the game would or could ever use.  This means that you either package all the data / images with your app, or you force the user to download this data when they launch the app.  The advantage of this method is that the user can be pretty much completely offline after that point and still get the entire experience of the game.  The disadvantage, of course, is that you front-load ALL of your content.  If the user is on EDGE (or worse!), they would be downloading a LOT more data than they may need, in addition to your app using more space on their device.

The other method is to set up some sort of caching strategy.  This requires the user to be online at least for the initial exploration of each section of your app, but after that, the data is stored on their device.  This can be problematic if they are offline, of course, but depending on the game, this may not be an issue.  In cached mode, the app attempts to read from disk and return that data WHILE making the call to the service to pull down the latest data.  To the end user, this is transparent.  Updating cached data is also routine, as all you have to do is invalidate the cache to get that bit of new data.

In Pointillism, we worry about two types of data — lists of data (Collections, Arrays, Vectors, etc.), and user-submitted images.  Our goal is to cache both.

Luckily, caching the images was super easy.  Dan Florio (PolyGeek) wrote a component known as ImageGate, which houses an Image component and a caching mechanism.  Using his component is as simple as substituting the <s:Image> in your MXML or ActionScript with his component, and boom — your images are cached as soon as they are viewed.  I did make a few tweaks to his component and posted it on my space over at Apache.  I substituted the Image component with a BitmapImage for speed, and added a small patch to cache the images in the proper location on iOS devices.

Caching lists of stuff was not much harder.  AIR has built-in “write to disk” functionality known as SharedObjects.  SharedObjects started as an alternative to cookies in the browser, but within AIR they let us store variables for long-term storage.  In my case, I chose to store the data that came back from the server in a SharedObject every time we got a response.  This turned out to be a good strategy, as it allowed us to show old data immediately and update it with current data once it came in.  Our data didn’t change /that/ often, so it might update at most every day or so.

One of our data manager’s constructor looked like this :

so = SharedObject.getLocal("org.pointi.cache"); // open (or create) our local cache
if (so.data.pointsList == null)
{
    so.data.pointsList = new Array(); // first run, so seed an empty cache
    so.flush(); // write the SharedObject out to disk
}

When we got our data back from our server, we did this :

so.data.pointsList[curHuntID] = event.result as ArrayCollection; // cache the fresh list
so.flush(); // persist it to disk right away

And finally, when we wanted to read back the data, this is all we had to do (pointsList is the variable that was sent to our calling components):

ro.getPointList(huntID, userID); // call the remote function on the server
if (so.data.pointsList[huntID] != null)
{
    // show the cached copy immediately; the server result will replace it
    pointsList = so.data.pointsList[huntID] as ArrayCollection;
}

Pretty simple, eh?  We did similar setups for all of our data lists, and also implemented some caching for outgoing data (like when the user successfully checked into a location), so we could keep the server in sync with the client.
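Invalidating the cache when you know a list has gone stale is just as short.  A minimal sketch, reusing the same SharedObject from above :

so.data.pointsList[huntID] = null; // invalidate this hunt's cached list
so.flush(); // the next read will find nothing and wait on the server call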

Adding a GPS-driven map to your Adobe AIR app

Over the next few blog posts I’m going to be writing about some of the cool little features I implemented in a recently released app I worked on — Pointillism.  It is pretty rare that I can talk about an app I’ve released, but the verbiage in this contract allows me to :)

On the admin interface of the app, the customer wanted to be able to add a “point” to the game.  A point is a destination that the end user is looking for in this virtual scavenger hunt.  In order to have the admins be able to visually see what their GPS was returning, we wanted to map the location, as well as the bounding area that they wanted people to be able to check in to.  While our admin interface was pretty basic, the functionality had to be there :

GPS and Map solution on iOS and Android

While most people would instantly reach for Google Maps, we decided to use ESRI’s mapping solution.  They offer a very accurate mapping solution that is consistent across all the platforms, in addition to being very flexible.  The one thing that Google Maps had a hard time providing us was the ability to draw the fence dynamically, built with realtime data that came from within our app.  It was important for us to be able to see the current location, and the valid locations where people could check in for that point.  The hardest thing was having the ESRI servers draw the circle (known as a buffer).  ESRI’s mapping platform is available for use FOR FREE, with very limited exceptions.  As a bonus, they have an entire SWC already pre-built for Flex/AIR.

So, how was it done?  It was actually pretty simple :

    1. Add the SWC from ESRI’s website to your project.
    2. Add their mapping components to your MXML file.  We added the mapping layer and then a graphics layer (where the circle is drawn).  We pointed the mapping layer at ESRI’s public mapping service.
      <esri:Map id="locMap" left="10" right="10" top="10" bottom="150" level="3"
                zoomSliderVisible="false" logoVisible="false" scaleBarVisible="false"
                mapNavigationEnabled="false">
          <esri:ArcGISTiledMapServiceLayer
              url="http://server.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer"/>
          <esri:GraphicsLayer id="trackingLayer"/>
      </esri:Map>
    3. We added a few components to the MXML’s declarations section.  This included the definition of the “symbol” (the circle itself), and the Geometry Service (the thing that figures out how to draw the circle in the correct place).
       <fx:Declarations>
           <esri:SimpleFillSymbol id="sfs" color="0xFF0000" alpha="0.5">
               <esri:SimpleLineSymbol color="0x000000"/>
           </esri:SimpleFillSymbol>
           <esri:GeometryService id="myGeometryService"
               url="http://tasks.arcgisonline.com/ArcGIS/rest/services/Geometry/GeometryServer"/>
       </fx:Declarations>
    4. Next, we had to write some code to update the map and draw the circle in the correct place.  This involves a few steps, including taking the GPS coordinates from our GPS device and creating a new “MapPoint” that holds those coordinates.  A MapPoint is exactly that: a single point on the map.  The thing about ESRI’s service is that it knows a LOT of different map coordinate systems, so you need to make sure you choose one that makes sense.  In our case, our GPS returns latitude/longitude data (WGS84), otherwise known as Spatial Reference number 4326, so that is what we use for the buffer, and we wrap the point in a WebMercatorMapPoint to project it and center our map.  Finally, we ask the Geometry Service to return a “buffer”, a series of points that represents a circle x feet around the center of our map.  When the buffer is returned from the web service, we draw it using the symbol we set up earlier and push it to the GraphicsLayer that is sitting on top of our map.  This all happens in a matter of seconds.  (The sensor wiring that feeds the gotGPS handler is sketched after this list.)
      protected function gotGPS(event:GeolocationEvent):void
      {
          // Project the raw lat/lon from the GPS into a map point
          var mp:MapPoint = new WebMercatorMapPoint(event.longitude, event.latitude);
          updateMapWithFence(mp);
          locMap.scale = 4000; // this is a magic number for the zoom level I wanted
          locMap.centerAt(mp);
          lastPoint = mp; // remember the fix so the fence can be redrawn later
      }

      protected function updateMapWithFence(mp:MapPoint):void
      {
          // Ask the Geometry Service for a circle ("buffer") around our point
          var bufferParameters:BufferParameters = new BufferParameters();
          bufferParameters.geometries = [ mp ];
          bufferParameters.distances = [ checkinDistance.value ]; // radius in feet
          bufferParameters.unit = GeometryService.UNIT_FOOT;
          bufferParameters.bufferSpatialReference = new SpatialReference(4326);
          bufferParameters.outSpatialReference = locMap.spatialReference;
          myGeometryService.addEventListener(GeometryServiceEvent.BUFFER_COMPLETE, bufferCompleteHandler);
          myGeometryService.buffer(bufferParameters);
      }

      private function bufferCompleteHandler(event:GeometryServiceEvent):void
      {
          trackingLayer.clear();
          myGeometryService.removeEventListener(GeometryServiceEvent.BUFFER_COMPLETE, bufferCompleteHandler);
          // The service returns one polygon per input geometry; draw each
          // one with the fill symbol from the declarations section.
          for each (var geometry:Polygon in event.result)
          {
              var graphic:Graphic = new Graphic();
              graphic.geometry = geometry;
              graphic.symbol = sfs;
              trackingLayer.add(graphic);
          }
      }
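For completeness, the gotGPS handler above needs to be fed by AIR’s Geolocation sensor.  A minimal sketch of that wiring (the 1000 ms update interval is an arbitrary choice, not from the original app) :

      import flash.events.GeolocationEvent;
      import flash.sensors.Geolocation;

      // Hook the device GPS up to the handler from step 4.
      if (Geolocation.isSupported)
      {
          var geo:Geolocation = new Geolocation();
          geo.setRequestedUpdateInterval(1000); // milliseconds; arbitrary
          geo.addEventListener(GeolocationEvent.UPDATE, gotGPS);
      }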

And that is about it!  Cross-platform mapping made pretty easy.  The cool thing about ESRI’s mapping solution is the power behind it.  They offer everything from the typical driving directions all the way through “How far can a user see if they stand on the ground at this point?”.  Since the components are native to your AIR app, they are fast and behave like you expect them to, without the mess of having an HTML overlay in your app.

Using AIR Native Extensions for Desktop and Mobile

During Wednesday’s meeting of the Michigan ActionScript User Group, we covered what AIR Native Extensions are, where to find the best ones, and how to actually use them.  The presentation included demos from both Desktop AIR and Mobile AIR projects.
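For the “how to actually use them” part, the short version is that an ANE ships as a .ane file that you link into your project much like a SWC, and then declare in your AIR application descriptor.  A minimal sketch of the descriptor entry (the extension ID is a hypothetical example) :

      <!-- In your -app.xml application descriptor: declare each linked ANE. -->
      <extensions>
          <extensionID>com.example.vibration</extensionID>
      </extensions>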

The two locations to find some of the more popular ANEs are :

I’m Speaking at Adobe MAX!

This year I was lucky enough to be selected as one of the speakers at Adobe MAX 2011!  I will have a session about integrating various hardware products with Adobe Flash, Flex and AIR.  Most of my talk will revolve around using the Microsoft Kinect and Arduino-based (and other AVR) projects as inputs and outputs for the Flash/Flex/AIR stack.

If you have been following me lately on Twitter, you will have seen me talking about some projects that I’ve been working on, including a Kinect version of Space Invaders and a BikePOV.  Both of these projects will be shown during my talk (in addition to others!)  The Kinect is such a cool input device that I think it is hampered only by the hurdles facing developers working with it (the situation with drivers, required libraries, dependencies and lack of documentation makes it REAL hard for non-developers to do anything with it).  The Arduino allows hobbyists to use their basic electronics skills to build very complex electronic gadgets and interact with them using a computer.  These are all things that required EE degrees when I was a kid, so it’s super cool to see that technology has progressed to the point where you can build this stuff quickly and easily.

Make sure to sign up for the session!  It is on Tuesday from 1 – 2pm!

Passing data back and forth to a View in a Mobile AIR application

On a recent AIR for Android app I was working on, I had the need to send data back from the current view to my controller.  The app was really simple, but the ability to send data back to my data controller turned it from a four-hour app into an all-day project.  It turns out that when you design your app around the ViewNavigatorApplication or the TabbedViewNavigatorApplication, it is really hard to get a reference to the View instantiated by the navigator.pushView() function.  In fact, navigator.activeView types everything as a plain View rather than your custom class (and the view isn’t created until the next frame, so you can’t grab a reference right away anyway), so you can’t call any methods on your class, nor attach any custom event listeners.

This makes it very difficult if you want your controller (or in my case, the main application) to push or pull any data to the View after it has been created.  I didn’t want to re-create the functionality of the ViewNavigatorApplication by hand (they did lots of really nice stuff in there), but the inflexibility seemed kind of odd.

Anyway, I remembered that you can bubble events from the View and have the main application listen for them.  It has the added advantage of not holding a direct reference to the View (which would prevent it from being garbage collected), but it still feels really dirty in my mind.  I guess I’ll get over it.
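A minimal sketch of the bubbling approach (the event name is hypothetical, not from the original app) :

// Inside the View: setting bubbles=true lets the event travel up the
// display list to the main application.
dispatchEvent(new Event("checkInComplete", true));

// In the main ViewNavigatorApplication: catch anything bubbling up.
this.addEventListener("checkInComplete", onCheckInComplete);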

Injecting data into the View from the controller is a bit trickier.  What I ended up doing was creating a new class that extends EventDispatcher, adding getters and setters that dispatch their own events.  I then package an instance of that class in the data property and send it along in the pushView() call.  The View can listen for events from that object, since it dispatches one with every change.  Again, this doesn’t create any references to the View (which is what the SDK was trying to avoid), but I can still get my data there.
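A minimal sketch of that pattern (the class, property, and event names are hypothetical) :

package
{
    import flash.events.Event;
    import flash.events.EventDispatcher;

    public class SharedViewData extends EventDispatcher
    {
        public static const SCORE_CHANGED:String = "scoreChanged";

        private var _score:int;

        public function get score():int
        {
            return _score;
        }

        public function set score(value:int):void
        {
            _score = value;
            dispatchEvent(new Event(SCORE_CHANGED)); // notify any listening View
        }
    }
}

The controller holds on to the instance and hands it to the view with navigator.pushView(MyView, sharedData); inside the view, the data property is that same instance, so it can listen for SCORE_CHANGED and react whenever the controller updates the value.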

Connecting your Flex application to BlazeDS or LiveCycle DS

If you have ever attended one of my presentations on BlazeDS, LiveCycle DS, or Flex/ColdFusion, you have heard me talk about how bad the data-connectivity wizards are in ALL of the IDEs available for the Flex SDK.  Even the new Flash Builder 4.5 doesn’t address the white-hot mess they call the data connectivity wizards that they started including in 4.0.  (Side note: I love most of the features in Flash Builder, and I use it every day, but some of the wizards they included to make life easier really don’t.)

Even including the services-config.xml document as a configuration option in your application will often lead to trouble in the long run.  This is included for you when you tell Flash Builder that you want to use ColdFusion or Java as your server model.  When you do this, the compiler inspects the services-config.xml configuration document on your server and builds a table of the channels (endpoints) and destinations that are configured on the server.  You can then call the destination by name in your RemoteObject (or similar) tags and not have to worry about how your client actually connects back to the server…

… until you need to move your application…

… or you need to connect your AIR application to your server…

… or you have a mobile or television application that needs resources on your server…

… or your server’s configuration changes.
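One common way out of this trap is to skip the compiled-in configuration and build the ChannelSet at runtime in ActionScript.  A minimal sketch, with the channel id and endpoint URL as placeholder values :

import mx.messaging.ChannelSet;
import mx.messaging.channels.AMFChannel;

// Build the endpoint configuration in code instead of baking
// services-config.xml in at compile time. Both values are placeholders.
var cs:ChannelSet = new ChannelSet();
cs.addChannel(new AMFChannel("my-amf", "http://yourserver/messagebroker/amf"));
myRemoteObject.channelSet = cs; // myRemoteObject is your RemoteObject instance

Because the endpoint is now just data, it can live in a config file and change when the app moves servers, instead of requiring a recompile.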


H.264 Being removed from Google’s Chrome Browser

This afternoon Google announced that they are dropping support for the H.264 codec in future releases of their browser, Chrome.  If you want to read more about the announcement, check it out here.

Essentially their argument is that they want to support open-source.  And there is no better way to support open-source than to include support for the new codec that they just bought (VP8) and announced would be open-source (WebM).  On paper it sounds great, and the world should cheer for them.  I personally support ALL the browsers including support for WebM, so at least there would be some consistency across the landscape for a single codec.

However, the way Google is introducing WebM into their browser is by removing support for the H.264 codec.  For those of you who don’t know what H.264 is, it is by far the most widely supported video codec today, directly supported by DVD players, Blu-Ray players, video game systems, and, more importantly, enterprise video systems.

Over the last few years we have seen enterprise video shift from tape to MPEG-2, then to MPEG-4 (H.264) video for recording, editing and storage.  The great thing about the industry was that as time passed, more and more devices gained support for H.264, so there was no need to re-encode your video to support each device under the sun.  It was starting to become a world where we could publish our video in a single format and push it out to a broad audience.  Re-encoding video takes time and storage space, and typically reduces the quality of the end product.

What is even more unfortunate is that recently we have seen H.264 decoders move into hardware on many devices, allowing them to decode high-definition content and display it full-screen without hitting the processor.  This is starting to allow developers to focus on making more interactive and better-looking user interfaces without worrying about degrading the quality of the video.  A good example of this is the H.264 decoder built into the newer Samsung televisions: you can run HD content as long as you are running H.264; otherwise you are lucky to play a 640×480 video clip without taxing the processor.  One of the reasons why HD video on the web sucked so badly on Mac-based computers was that Apple didn’t allow the Flash Player to access the H.264 hardware accelerator.  Newer releases of the Flash Player now support it, and HD video is smooth as silk on the web.

By moving to WebM and abandoning H.264, Google forces manufacturers to reassess their decision to support H.264 in hardware.  It will also force content producers to re-encode their video in yet another format (can you imagine YouTube re-encoding their content into WebM just to support Chrome users?), and it will make the HTML5 video tag even harder to work with.  Content producers will need to produce their content in H.264 for Flash, many WebKit browsers, and IE9; in WebM for Chrome; and in Ogg Theora for Firefox.  The real tricky part is that, to date, there are virtually no commercial products that can encode in all three required formats.

So, if you are looking to support video on the web, what are you to do?  Right now a majority of users’ browsers don’t support HTML5 video, but they do support H.264 through the Flash Player.  iPad/iPhone/Android devices support H.264 directly, and embedded systems like the Samsung HDTVs, Sony PS3, etc. all support H.264-encoded video.  The only reason to encode in a different format today is to support Firefox and Chrome users who don’t have the Flash Player installed (less than 1% of all users?).  This recommendation I’m sure will change over the years, but for right now, you still pretty much have a one-codec-meets-most-demands solution that you can count on.

Creating Mobile Applications from a Photoshop Prototype

Thanks to Dr. Coursaris for this photo.

At the WUD Conference, East Lansing, MI

This past Thursday, I was given the opportunity to present on a really cool but obscure topic for the World Usability Day Conference: creating mobile applications from Photoshop prototypes.  Essentially, my job was to show people a workflow that is possible when using Adobe Device Central to create a Photoshop file, which is then turned into a working prototype using Adobe Flash Catalyst, and then programmed using Adobe Flash Builder.

All in all, the conference was excellent, and I was honored to be on stage after notable presenters from the State of Michigan, Motorola, Nokia and well, even the University of Michigan.  The focus of this year’s conference was on usability, and how it relates with mobile applications and devices, which was a perfect match for the presentation I was doing.

After my quick introduction to the subject, I demoed making a complete application from scratch and deploying it to a series of working phones.  I was able to accomplish this workflow in under half an hour, which amazed not only me, but the audience too.  It is really cool to realize that the technologies I’ve been using as betas for so long have matured to the point where I can use them to make real applications.

The session was recorded and hopefully will be posted online soon.  You can view my PowerPoint here (I did have to disable the live voting, but I kept the results for historical purposes), and download ALL the source code and asset files that we used during the presentation here.  Please keep in mind that the logos in the demo code are subject to the use standards found here.

Thanks to the WUD East Lansing team for inviting me!

Creating your first Application for TV

One of the major announcements that came out of Adobe MAX was the availability of AIR 2.5, the first SDK to support televisions as an output.  While most of you may be scratching your heads as to why this is a big deal, the few of you who have ever attempted to write an application for a STB (set-top box) or directly for a television know that they are one hard nut to crack.

Generally, up to now, if you needed to push an application to a television-connected device (including DVD players, Blu-Ray players, STBs, or TVs themselves), you either had to learn the vendor’s proprietary language, go with the vendor’s interpretation of Java, or just pay them to make the app for you.  Additionally, it has only been a very short while that the manufacturers have even given developers the ability to push apps to these devices (with Samsung really paving the way in the past few months).

In comes Adobe with their Open Screen Project, where they are allowing common RIA developers to simply create applications that can be deployed to these television-connected devices.  The dream of write-once-publish-anywhere just got extended to another class of devices.  Mind you, these will be high-end devices at first; for example, take a look at Samsung’s TV lineup (filter by Samsung Apps in the features section) to get an idea of what you will be targeting.

Adobe MAX write-up

I’ve just come back from this year’s Adobe MAX conference, and oh boy, was it a whirlwind!  I feel that Adobe outdid themselves this year and set the bar much higher.  I guess being in the same location more than one year in a row lets them concentrate on content rather than logistics.

The theme of the conference this year was different from previous years.  In years past it was about products coming out soon, product announcements, or beating the drum of certain technologies.  This year there were virtually no product announcements, and no announcements of things coming soon.  It was all about what is out today, and how to leverage it going forward.  I know this disappointed a lot of people in the audience, but with Adobe having launched most of their products (ColdFusion, Flex, CS5, etc.) just a few months ago, there wasn’t much to talk about.
