QueTwo's Blog

thoughts on telecommunications, programming, education and technology


Simple Caching Techniques in Adobe AIR

One aspect of the recently released Pointillism mobile app was that users were expected to use the game while in remote areas.  Remote areas often mean that data service is limited, or just plain not available at all, and that can wreak havoc for game participants waiting for data to load.  There are two schools of thought on how to approach this problem.

One is to pre-load all the content that the game could ever use.  This means that you either package all the data / images with your app, or you force the user to download them when they launch the app.  The advantage of this method is that the user can be completely offline after that point and still get the entire experience of the game.  The disadvantage, of course, is that you front-load ALL of your content.  If the user is on EDGE (or worse!), they would be downloading a LOT more data than they may need, in addition to your app taking up more space on their device.

The other method is to set up some sort of caching strategy.  This requires the user to be online at least for the initial exploration of each section of your app, but after that, the data is stored on their device.  This can be problematic if they are offline, of course, but depending on the game, this may not be an issue.  In cached mode, the app attempts to read from disk and return that data WHILE making the call to the service to pull down the latest data.  To the end user, this becomes transparent.  Updating cached data is also routine, as all you have to do is invalidate the cache to get the new data.
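
As a rough sketch of that pattern (the class and event names here are hypothetical, not the actual Pointillism code), a data manager in AIR might look like:

```actionscript
// Hypothetical sketch of the "read cache, then refresh" pattern.
// loadPoints() returns cached data immediately (if any), then fires the
// remote call; the result handler overwrites the cache for next time.
private var so:SharedObject = SharedObject.getLocal("org.example.cache");

public function loadPoints():void
{
    // 1. Serve whatever is already on disk right away.
    if (so.data.points != null)
        dispatchEvent(new PointsEvent(PointsEvent.LOADED, so.data.points));

    // 2. Kick off the service call in parallel.
    ro.getPoints(); // RemoteObject call; resultHandler fires when done
}

private function resultHandler(event:ResultEvent):void
{
    so.data.points = event.result;  // replace (invalidate) the cached copy
    so.flush();                     // persist to disk
    dispatchEvent(new PointsEvent(PointsEvent.LOADED, event.result));
}
```

The user sees the stale list instantly, and the view simply re-renders if fresher data arrives a moment later.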

In Pointillism, we worry about two types of data — lists of data (Collections, Arrays, Vectors, etc.), and user-submitted images.  Our goal is to cache both.

Luckily, caching the images was super easy.  Dan Florio (PolyGeek) wrote a component known as ImageGate, which houses an Image component and a caching mechanism.  Using his component is as simple as substituting the <s:Image> in your MXML or ActionScript with his component, and boom, your images are cached as soon as they are viewed.  I did make a few tweaks to his component and posted it on my space over at Apache.  I substituted the Image component with a BitmapImage for speed, and added a small patch to cache the images in the proper location on iOS devices.
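
The substitution is literally a one-line change in MXML.  A sketch of the before and after (the namespace prefix and URL below are placeholders; check the ImageGate source for the real ones):

```xml
<!-- Before: plain Spark Image, fetched over the network on every view -->
<s:Image source="http://example.com/photos/point42.jpg"/>

<!-- After: ImageGate wraps the image and writes it to disk on first view,
     so subsequent views are served from the local cache -->
<polygeek:ImageGate source="http://example.com/photos/point42.jpg"/>
```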

Caching lists of stuff was not much harder.  AIR has a built-in “write to disk” functionality known as SharedObjects.  SharedObjects started as an alternative to cookies in the browser, but within AIR they let us store variables for long-term storage.  In my case, I chose to store the data that came back from the server in a SharedObject every time we got a response.  This turned out to be a good strategy, as it allowed us to show old data immediately and update it with current data once it came in.  Our data didn’t change /that/ often, so it might update at most every day or so.

One of our data managers’ constructors looked like this:

so = SharedObject.getLocal("org.pointi.cache");
if (so.data.pointsList == null)
{
    so.data.pointsList = new Array();
    so.flush();
}

When we got our data back from our server, we did this:

so.data.pointsList[curHuntID] = event.result as ArrayCollection;
so.flush();

And finally, when we wanted to read back the data, this is all we had to do (pointsList is the variable that was sent to our calling components):

ro.getPointList(huntID, userID); // call the remote function on the server
if (so.data.pointsList[huntID] != null)
{
    pointsList = so.data.pointsList[huntID] as ArrayCollection;
}

Pretty simple, eh?  We did similar setups for all of our data lists, and also implemented some caching for outgoing data (like when the user successfully checked into a location), so we could keep the server in sync with the client.
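
The outgoing cache worked on the same idea, just in reverse.  A hypothetical sketch (again, not the actual Pointillism code): check-ins get appended to a SharedObject-backed queue, and the queue is drained whenever a network call succeeds.

```actionscript
// Hypothetical outgoing queue: store check-ins on disk until they can be
// delivered, so the client and server eventually end up in sync.
private var so:SharedObject = SharedObject.getLocal("org.example.outbox");

public function checkIn(pointID:int):void
{
    if (so.data.outbox == null)
        so.data.outbox = new Array();
    so.data.outbox.push({pointID: pointID, time: new Date().time});
    so.flush();   // survives an app kill or network drop
    trySend();
}

private function trySend():void
{
    if (so.data.outbox == null || so.data.outbox.length == 0)
        return;
    // Send the oldest entry.  In the ResultEvent handler, shift() it off
    // the queue, flush(), and call trySend() again; on a FaultEvent,
    // leave it queued for the next attempt.
    ro.submitCheckIn(so.data.outbox[0]);
}
```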

Adding a GPS-driven map to your Adobe AIR app

Over the next few blog posts I’m going to be writing about some of the cool little features I implemented in a recently released app I worked on — Pointillism.  It is pretty rare that I can talk about an app I’ve released, but the verbiage in this contract allows me to :)

On the admin interface of the app, the customer wanted to be able to add a “point” to the game.  A point is a destination that the end user is looking for in this virtual scavenger hunt.  In order to have the admins be able to visually see what their GPS was returning, we wanted to map the location, as well as the bounding area that they wanted people to be able to check in to.  While our admin interface was pretty basic, the functionality had to be there:

GPS and Map solution on iOS and Android

While most people would instantly reach for Google Maps, we decided to use ESRI’s mapping solution.  They offer a very accurate mapping solution that is consistent across all the platforms, in addition to being very flexible.  The one thing that Google Maps had a hard time providing us was the ability to draw the fence in a dynamic manner, built with realtime data that came from within our app.  It was important for us to be able to see the current location, and the valid locations where people could check in for that point.  The hardest thing was having the ESRI servers draw the circle (known as a buffer).  ESRI’s mapping platform is available for use FOR FREE, with very limited exceptions.  As a bonus, they have an entire SWC already pre-built for Flex/AIR.

So, how was it done?  It was actually pretty simple:

    1. Add the SWC from ESRI’s website to your project.
    2. Add their mapping components to your MXML file.  We added the mapping layer and then a graphics layer (where the circle is drawn).  We pointed the mapping layer at ESRI’s public mapping service.
       <esri:Map id="locMap" left="10" right="10" top="10" bottom="150" level="3" zoomSliderVisible="false"
                 logoVisible="false" scaleBarVisible="false" mapNavigationEnabled="false">
           <esri:ArcGISTiledMapServiceLayer
               url="http://server.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer"/>
           <esri:GraphicsLayer id="trackingLayer"/>
       </esri:Map>
    3. We added a few components to the MXML’s declarations section.  This included the definition of the “symbol” (the circle itself), and the Geometry Service (the thing that figured out how to draw the circle in the correct place).
       <fx:Declarations>
           <esri:SimpleFillSymbol id="sfs" color="0xFF0000" alpha="0.5">
               <esri:SimpleLineSymbol color="0x000000"/>
           </esri:SimpleFillSymbol>
           <esri:GeometryService id="myGeometryService"
               url="http://tasks.arcgisonline.com/ArcGIS/rest/services/Geometry/GeometryServer"/>
       </fx:Declarations>
    4. Next, we had to write some code to update the map and draw the circle in the correct place.  This involves a few steps, including taking the coordinates from our GPS device and creating a new “MapPoint” which holds those coordinates.  A MapPoint is exactly that, a single point on the map.  The thing about ESRI’s service is that it knows a LOT of different map coordinate systems, so you need to make sure you choose one that makes sense.  In our case, our GPS returns plain latitude/longitude coordinates, otherwise known as Spatial Reference number 4326, so that is what we use to project the point that centers our map.  Finally, we ask the Geometry Service to return a “buffer”, a series of points that represents a circle x feet around the center of our map.  When the buffer is returned from the web service, we draw it using the graphic we set up earlier and push it to the GraphicsLayer that is sitting on top of our map.  This all happens in a matter of seconds.
       protected function gotGPS(event:GeolocationEvent):void
       {
           var mp:MapPoint = new WebMercatorMapPoint(event.longitude, event.latitude);
           updateMapWithFence(mp);
           locMap.scale = 4000; // this is a magic number for the zoom level I wanted
           locMap.centerAt(mp);
           lastPoint = mp;
       }

       protected function updateMapWithFence(mp:MapPoint):void
       {
           var bufferParameters:BufferParameters = new BufferParameters();
           bufferParameters.geometries = [ mp ];
           bufferParameters.distances = [ checkinDistance.value ];
           bufferParameters.unit = GeometryService.UNIT_FOOT;
           bufferParameters.bufferSpatialReference = new SpatialReference(4326);
           bufferParameters.outSpatialReference = locMap.spatialReference;

           myGeometryService.addEventListener(GeometryServiceEvent.BUFFER_COMPLETE, bufferCompleteHandler);
           myGeometryService.buffer(bufferParameters);
       }

       private function bufferCompleteHandler(event:GeometryServiceEvent):void
       {
           trackingLayer.clear();
           myGeometryService.removeEventListener(GeometryServiceEvent.BUFFER_COMPLETE, bufferCompleteHandler);

           for each (var geometry:Polygon in event.result)
           {
               var graphic:Graphic = new Graphic();
               graphic.geometry = geometry;
               graphic.symbol = sfs;
               trackingLayer.add(graphic);
           }
       }

And that is about it!  Cross-platform mapping made pretty easy.  The cool thing about ESRI’s mapping solution is the power behind it.  They offer everything from typical driving directions all the way through “how far can a user see if they stand on the ground at this point?”.  Since the components are native to your AIR app, they are fast and behave as you expect, without the mess of having an HTML overlay in your app.

Geolocation issues with AIR on iOS (with a fix!)

The recent “GM” version of iOS 6 (this includes the production version that is now shipping on the iPhone 5) broke one of the APIs that Adobe AIR depends on for Geolocation.

If you have an app that was published with Adobe AIR 2.7 through AIR 3.4, your app will either get rejected if you are just submitting it to the store, or it may be broken if it is already in the store.  This is in addition to some other feature interaction issues that have also cropped up in iOS 6.

The bug is that the Geolocation (GPS) subsystem is never actually called; it fails silently, just as if the user had denied the application access to the GPS.

Luckily, there is a work-around, but it does require a re-compile of your application.

  1. Make sure your app is running at least AIR 3.0.  AIR 3.4 is currently in production and works great with this work-around.  AIR 3.5 is on the roadmap, and addresses some other iOS 6 issues (including the ability to address the iPhone 5’s full screen).
  2. Update the <name> tag within your application descriptor XML.  Wrap the actual name of your application in an extra <text xml:lang="en"> tag.  For example,
    <name>
     <text xml:lang="en">MyGPSApp</text>
    </name>
  3. Update the version number.
  4. Make sure your app is compatible with the new storage requirements in App Store guideline 2.23.  Details are available here: http://blogs.adobe.com/airodynamics/2012/03/05/app-compliance-with-apple-data-storage-guidelines/
  5. Compile, and wait.  Your app’s GPS should now work in iOS 6 (as well as earlier versions).
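
Put together, the relevant part of the application descriptor ends up looking something like this (the app id, name, and version number are placeholders; the rest of your descriptor stays as-is):

```xml
<application xmlns="http://ns.adobe.com/air/application/3.4">
    <id>org.example.MyGPSApp</id>
    <versionNumber>1.0.1</versionNumber> <!-- bumped for the resubmission -->
    <name>
        <!-- the extra <text> wrapper is the iOS 6 work-around -->
        <text xml:lang="en">MyGPSApp</text>
    </name>
    ...
</application>
```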

Hope that helps some people out!  I know it was a big sticking point when I had some apps recently rejected because of this incompatibility.

Creating Mobile Applications from a Photoshop Prototype

Thanks to Dr. Coursaris for this photo.

At the WUD Conference, East Lansing, MI

This past Thursday, I was given the opportunity to present on a really cool, but obscure topic — creating mobile applications from Photoshop prototypes, for the World Usability Day Conference.  Essentially, my job was to show people a workflow that is possible when using Adobe Device Central to create a Photoshop file, that is then turned into a working prototype using Adobe Catalyst, and then programmed using Adobe Flash Builder. 

All in all, the conference was excellent, and I was honored to be on stage after notable presenters from the State of Michigan, Motorola, Nokia and well, even the University of Michigan.  The focus of this year’s conference was on usability, and how it relates with mobile applications and devices, which was a perfect match for the presentation I was doing.

After my quick introduction to the subject, I demoed making a complete application from scratch and deployed it to a series of working phones.  I was able to accomplish this workflow in under a half hour, which was completely amazing for not only myself, but the audience too.  It is really cool to realize that the technologies that I’ve been using as betas for so long have actually matured to the point where I can use them to make real applications.

The session was recorded and hopefully will be posted online soon.  You can view my PowerPoint here (I did have to disable the live voting, but I kept the results for historical purposes), and download ALL the source code and asset files that we used during the presentation here.  Please keep in mind that the logos in the demo code are subject to the use standards found here.

Thanks to the WUD East Lansing team for inviting me!
