QueTwo's Blog

thoughts on telecommunications, programming, education and technology

Tag Archives: Application Development

Creating a one-time login for a mobile AIR Application

One of the aspects of Pointillism was that we wanted to minimize the amount of time that the user needed to worry about logins, passwords and signing up for the service.  This should be a pretty common goal for most mobile applications — the more you force the user to input that type of information into your app (or verify, re-verify, etc), the less chance they will use it.

We decided to base the app around the “ViewNavigatorApplication” model within Flex.  For the rest of the application, it made perfect sense, as this type of app could easily be built around “screens” that were stacked as the user moved from one activity to another.  The problem was that if I wanted to force the user to log in, I would either have to introduce some sort of “launching” screen containing the logic check to see whether the user had logged in before, or leave the “firstView” property of the application tag undefined and have some script in the Application tag decide where to go.

My solution was this: I defined the firstView to go right to the dashboard within the application (where a logged-in user would go).  I then added a bit of code to the initialize event handler that could intercept the creation of the View and force it to the login screen ONLY IF the user had never logged in before.  This allowed the app to launch very quickly in the normal case where the user had already logged in, yet still force the login in a seamless way.  It also meant that the user wasn’t subjected to multiple awkward transitions while the application decided whether they were logged in or not.

<s:ViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark"
                            initialize="preAppInit()" firstView="org.pointi.views.MainScreen" .....>

    <fx:Script>
        <![CDATA[
            import ....

            public function preAppInit():void
            {
                var userInfo:CurrentUserManager = new CurrentUserManager();
                if (!userInfo.isLoggedIn())
                {
                    // Never logged in before: skip the push animation and swap in the login screen
                    navigator.defaultPushTransition = null;
                    navigator.replaceView(LoginScreen);
                }
            }
        ]]>
    </fx:Script>

You will note that I set the default “push” transition to null because I wanted it to seem that the application launched right to the login screen, instead of flipping to it (which would give the impression that the user could hit the back button to go back to another screen).  Otherwise, the rest should be pretty self-explanatory.

Simple Caching Techniques in Adobe AIR

One of the aspects of the Pointillism mobile app that was recently released was that users were expected to use the game while in remote areas.  Remote areas often mean that data service is limited or just plain not available at all, and that can wreak havoc for game participants waiting for data to load.  There are two schools of thought on how to approach this problem.

One is to pre-load all the content that the game would or could ever use.  This means that you either package all the data / images with your app, or you force the user to download this data when they launch the app.  The advantage of this method is that the user can pretty much be completely offline after that point and still get the entire experience of the game.  The disadvantage, of course, is that you front-load ALL of your content.  If the user is on EDGE (or worse!), this means they would be downloading a LOT more data than they may need to, in addition to making your app use more space on the end devices.

The other method is to set up some sort of caching strategy.  This requires the user to be online at least for the initial exploration of each section of your app, but after that, the data is stored on their device.  This can be problematic if they are offline, of course, but depending on the game, this may not be an issue.  In a cached mode, the app attempts to read from disk and return that data WHILE making the call to the service to pull down the latest data.  To the end user, this is transparent.  Updating cached data is also routine, as all you have to do is invalidate the cache to fetch that bit of new data.

In Pointillism, we worry about two types of data — lists of data (Collections, Arrays, Vectors, etc.), and user-submitted images.  Our goal is to cache both.

Luckily, caching the images was super easy.  Dan Florio (PolyGeek) wrote a component known as ImageGate, which houses an Image component and a caching mechanism.  Using his component is as simple as substituting the <s:Image> in your MXML or ActionScript with his component, and boom, your images are cached as soon as they are viewed.  I did make a few tweaks to his component and posted it on my space over at Apache.  I substituted the Image component with a BitmapImage for speed, and added a small patch to cache the images in the proper location on iOS devices.
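Purely as an illustration of that drop-in swap (the polygeek namespace prefix and the photoURL binding below are my assumptions, not PolyGeek’s actual package, so check the component source for the real names):

 <!-- before: a plain Spark Image, re-downloaded from the network every time the view is created -->
 <s:Image source="{point.photoURL}"/>

 <!-- after: the caching component swapped in; the bitmap is written to local storage the first time it loads -->
 <polygeek:ImageGate source="{point.photoURL}"/>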

Caching lists of stuff was not much harder.  AIR has a built-in “write to disk” feature known as SharedObjects.  SharedObjects started as an alternative to cookies in the browser, but within AIR they let us store variables for long-term storage.  In my case, I chose to store the data that came back from the server in a SharedObject every time we got a response.  This turned out to be a good strategy, as it allowed us to show old data immediately and update it with current data once it came in.  Our data didn’t change /that/ often, so it might update at most every day or so.

One of our data managers’ constructors looked like this:

 // 'so' is a class-level variable:  private var so:SharedObject;
 so = SharedObject.getLocal("org.pointi.cache");
 if (so.data.pointsList == null)
 {
     // First run: create an empty cache structure and write it to disk
     so.data.pointsList = new Array();
     so.flush();
 }

When we got our data back from our server, we did this :

 // Cache the freshly returned list for this hunt, then write it to disk
 so.data.pointsList[curHuntID] = event.result as ArrayCollection;
 so.flush();

And finally, when we wanted to read back the data, this is all we had to do (pointsList is the variable that was sent to our calling components):

 ro.getPointList(huntID, userID); // call the remote function on the server; its result handler will refresh the cache
 if (so.data.pointsList[huntID] != null)
 {
     // Hand back whatever is cached right away; fresh data replaces it when the call returns
     pointsList = so.data.pointsList[huntID] as ArrayCollection;
 }

Pretty simple, eh?  We did similar setups for all of our data lists, and also implemented some caching for outgoing data (like when the user successfully checked into a location), so we could keep the server in sync with the client.
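We never posted that sync code, but a minimal sketch of the outgoing side, assuming a pendingCheckins key on the same SharedObject and a hypothetical submitCheckins() method on our RemoteObject (neither is the real Pointillism code), could look like this:

 import flash.net.SharedObject;
 import mx.rpc.AsyncResponder;
 import mx.rpc.AsyncToken;
 import mx.rpc.events.FaultEvent;
 import mx.rpc.events.ResultEvent;

 // Queue outgoing check-ins on disk first, then push them whenever the network cooperates.
 private var so:SharedObject = SharedObject.getLocal("org.pointi.cache");

 public function recordCheckin(pointID:int, userID:int):void
 {
     if (so.data.pendingCheckins == null)
         so.data.pendingCheckins = new Array();

     // Write to disk immediately so the check-in survives an offline stretch or an app restart
     so.data.pendingCheckins.push({pointID: pointID, userID: userID, time: new Date().time});
     so.flush();

     syncPendingCheckins();
 }

 public function syncPendingCheckins():void
 {
     if (so.data.pendingCheckins == null || so.data.pendingCheckins.length == 0)
         return;

     // 'ro' is our RemoteObject; submitCheckins() is a stand-in for whatever the server actually exposes
     var token:AsyncToken = ro.submitCheckins(so.data.pendingCheckins);
     token.addResponder(new AsyncResponder(
         function (result:ResultEvent, tok:Object = null):void
         {
             so.data.pendingCheckins = new Array();  // server confirmed, so empty the local queue
             so.flush();
         },
         function (fault:FaultEvent, tok:Object = null):void
         {
             // Still offline; leave the queue on disk and try again on the next check-in or app launch
         }));
 }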

Adding a GPS-driven map to your Adobe AIR app

Over the next few blog posts I’m going to be writing about some of the cool little features I implemented in a recently released app I worked on — Pointillism.  It is pretty rare that I can talk about an app I’ve released, but the verbiage in this contract allows me to :)

On the admin interface of the app, the customer wanted to be able to add a “point” to the game.  A point is a destination that the end user is looking for in this virtual scavenger hunt.  In order to have the admins be able to visually see what their GPS was returning, we wanted to map the location, as well as the bounding area that they wanted people to be able to check in to.  While our admin interface was pretty basic, the functionality had to be there :

GPS and Map solution on iOS and Android

While most people would instantly reach for Google Maps, we decided to use ESRI’s mapping solution.  They offer a very accurate mapping solution that is consistent across all the platforms, in addition to being very flexible.  The one thing that Google Maps had a hard time providing us was the ability to draw the fence dynamically, built with realtime data coming from within our app.  It was important for us to be able to see the current location and the valid locations where people could check in for that point.  The hardest thing was having the ESRI servers draw the circle (known as a buffer).  ESRI’s mapping platform is available for use FOR FREE, with very limited exceptions.  As a bonus, they have an entire SWC already pre-built for Flex/AIR.

So, how was it done?  It was actually pretty simple :

    1. Add the SWC from ESRI’s website to your project.
    2. Add their mapping components to your MXML file.  We added the mapping layer and then a graphic layer (where the circle is drawn).  The mapping layer, we pointed to ESRI’s public mapping service.
      <esri:Map id="locMap" left="10" right="10" top="10" bottom="150" level="3" zoomSliderVisible="false"
                logoVisible="false" scaleBarVisible="false" mapNavigationEnabled="false">
          <esri:ArcGISTiledMapServiceLayer
              url="http://server.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer"/>
          <esri:GraphicsLayer id="trackingLayer"/>
      </esri:Map>
    3. We added a few components to the MXML’s declaration section.  This included the definition of the “symbol” (the circle itself), and the Geometry Service (the thing that figured out how to draw the circle in the correct place).
       <fx:Declarations>
           <esri:SimpleFillSymbol id="sfs" color="0xFF0000" alpha="0.5">
               <esri:SimpleLineSymbol color="0x000000"/>
           </esri:SimpleFillSymbol>
           <esri:GeometryService id="myGeometryService"
               url="http://tasks.arcgisonline.com/ArcGIS/rest/services/Geometry/GeometryServer"/>
       </fx:Declarations>
    4. Next, we had to write some code to update the map and draw the circle in the correct place.  This involves a few steps, including taking the coordinates from our GPS device and creating a new “MapPoint” which holds them.  A MapPoint is exactly that, a single point on the map.  The thing about ESRI’s service is that it knows a LOT of different map coordinate systems, so you need to make sure you choose one that makes sense.  In our case, our GPS returns latitude/longitude data, otherwise known as Spatial Reference 4326, so that is the spatial reference we use for the buffer, and the WebMercatorMapPoint class takes care of projecting that point so we can center the map on it.  Finally, we ask the Geometry Service to return a “buffer”, a series of points that represents a circle x feet around the center of our map.   When the buffer is returned from the web service, we draw it using the symbol we set up earlier and push it to the GraphicsLayer that is sitting on top of our map.  This all happens in a matter of seconds.
      protected function gotGPS(event:GeolocationEvent):void
      {
          // Wrap the raw GPS reading in a MapPoint that the ESRI API understands
          var mp:MapPoint = new WebMercatorMapPoint(event.longitude, event.latitude);
          updateMapWithFence(mp);
          locMap.scale = 4000; // this is a magic number for the zoom level I wanted
          locMap.centerAt(mp);
          lastPoint = mp;
      }

      protected function updateMapWithFence(mp:MapPoint):void
      {
          // Ask the Geometry Service for a circle (buffer) x feet around our point
          var bufferParameters:BufferParameters = new BufferParameters();
          bufferParameters.geometries = [ mp ];
          bufferParameters.distances = [ checkinDistance.value ];
          bufferParameters.unit = GeometryService.UNIT_FOOT;
          bufferParameters.bufferSpatialReference = new SpatialReference(4326);
          bufferParameters.outSpatialReference = locMap.spatialReference;

          myGeometryService.addEventListener(GeometryServiceEvent.BUFFER_COMPLETE, bufferCompleteHandler);
          myGeometryService.buffer(bufferParameters);
      }

      private function bufferCompleteHandler(event:GeometryServiceEvent):void
      {
          // Clear the old fence, then draw the polygon(s) the service returned onto the graphics layer
          trackingLayer.clear();
          myGeometryService.removeEventListener(GeometryServiceEvent.BUFFER_COMPLETE, bufferCompleteHandler);
          for each (var geometry:Polygon in event.result)
          {
              var graphic:Graphic = new Graphic();
              graphic.geometry = geometry;
              graphic.symbol = sfs;
              trackingLayer.add(graphic);
          }
      }

And that is about it!  Cross-platform mapping made pretty easy.  The cool thing about ESRI’s mapping solution is the power behind it.  They offer everything from the typical driving directions all the way through “How far can a user see if they stand on the ground at this point?”   Since the components are native to your AIR app, they are fast and behave like you expect them to, without the mess of having an HTML overlay in your app.

Compiling the Apache Flex SDK with IntelliJ

I’ve only been using IntelliJ for a few weeks now, but I love it.  I see myself using this as my primary IDE for all things Apache Flex as time moves forward.

One question that has been asked quite frequently on the Apache Flex Dev mailing list is “How do I compile the Apache Flex SDK with IntelliJ?”  Well, since a picture is worth a thousand words, a video on the subject must be worth… umm.. (11 minute video, at 15 frames a second, times the value of pi… )  9,900,000 words!

Compiling Apache Flex SDK with IntelliJ

  1. Grab the Requirements :
    1. Java JDK 1.5, 1.6 or 1.7
    2. Adobe Open Source Flex SDK 4.6 (needed for the compiler at the time of writing)
    3. IntelliJ with ANT, Flex and Java plugins
  2. Create a new Project
  3. Create a new Java Module.  Name it anything you wish.
  4. Create a new Flex Module within that last Module.  It must be named “frameworks”
  5. Unzip the contents of the Open-Source Flex SDK into your Java Module EXCEPT the frameworks directory.
  6. Check out the frameworks directory from the Apache SVN (https://svn.apache.org/repos/asf/incubator/flex/trunk).  Make sure it ends up in the frameworks directory.
  7. Load up the ANT tab, and add the /frameworks/build_framework.xml  file.
  8. Hit the “Run” icon to start the compile.
  9. Drink a beer, or take a shower — depending on what the clock says.

After about 7 minutes or so (my computer compiles it all in 422 seconds on average), you should have a successful build, and a custom-compiled SDK!

NOTE:  The reason we created two modules is so that you can create your own branch (or switch to somebody else’s branch) without a whole lot of heartache.  All you would need to do is go to the frameworks module and change the branch you are checking out from.  This will allow you to create patches and submit them into JIRA against the current “patches” branch, instead of the trunk.

Using AIR Native Extensions for Desktop and Mobile

During Wednesday’s meeting of the Michigan ActionScript User Group, we covered what AIR Native Extensions are, where to find the best ones, and how to actually use them.  Includes demos from both Desktop AIR and Mobile AIR projects.

The two locations to find some of the more popular ANEs are :

Creating a Windows AIR Native Extension with Eclipse – Part 4

In this final of my 4-part video series, I show you how to import and use the ANE that we created in the last three videos.  We will be using Adobe Flash Builder 4.6 to import the ANE, and we will build a very quick sample application that will use the getTestString and getHelloWorld functions that we wrote in our native DLL written in C.
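For a rough idea of what that sample app boils down to, here is a minimal sketch.  The extension ID below is a placeholder and has to match whatever you declared in your extension descriptor; getTestString() and getHelloWorld() are the two native functions from the earlier parts:

 import flash.external.ExtensionContext;

 // Create a context against the ANE we packaged in the earlier videos.
 // "com.example.TestANE" is a placeholder ID; it must match your extension descriptor.
 private var extContext:ExtensionContext =
     ExtensionContext.createExtensionContext("com.example.TestANE", null);

 private function runNativeCalls():void
 {
     // Both functions live in the native DLL written in C; call() marshals the return values back to AS3
     var testString:String = extContext.call("getTestString") as String;
     var helloWorld:String = extContext.call("getHelloWorld") as String;

     trace("getTestString returned: " + testString);
     trace("getHelloWorld returned: " + helloWorld);
 }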

If you want a copy of all the final projects, you can download them here.  The ZIP file includes the CDT project, the compiled DLL, the ActionScript project, the compiled ANE and the project created in this fourth video.  Enjoy!

Creating a Windows AIR Native Extension with Eclipse – Part 2

In part two of this video series, I go through how to actually program your ANE Windows DLL.  This involves doing some C programming.  Please see part 1 here.

The snippets mentioned in this video are available here: ANE Snippets Download.  You can use these to jump-start your development.

Creating a Windows AIR Native Extension with Eclipse – Part 1

The second I heard about Adobe giving us the ability to create our own extensions to the Flash Platform in AIR 3.0, I was smitten.  It was finally a way that we could add our own features and do the things that were high priorities on our lists, but not on Adobe’s.  I knew the features I was looking for were one-offs (how many people today really need access to the COM ports?), but not having them forced me into all sorts of weird workarounds, like launching proxy applications to do seemingly simple tasks.

AIR 3.0 was released a few weeks ago and I’ve jumped head first into creating some ANEs (AIR Native Extensions).  For those of you who don’t know, ANEs are packaged extensions that contain operating-system-specific code (DLLs for Windows, libraries for MacOS, Java classes for Android and Objective-C for iOS) that allow you to do things the Flash Player wasn’t able to do on its own.

Unfortunately, Adobe assumed that if you were developing DLLs for Windows, you were going to be using Visual Studio and nothing else.  This didn’t make a whole lot of sense to me, as they’ve been leveraging Eclipse for all of their tooling, and Eclipse does offer some great C/C++ add-ins.  That being said, Visual Studio is by far the more full-featured option and hands-down the best editor for these kinds of workflows on Windows.  It is, however, very costly, and even though Microsoft offers a free version, it takes over your computer by installing debug versions of most of Microsoft’s shared libraries, making your machine slower and more crash-prone.

I wanted to use Eclipse’s CDT add-in with the standard GCC tooling that is available on pretty much every operating system.  By using GCC, I was able to write very portable code that, with minimal effort, compiled on all three of the major OSs (Windows, Mac, Linux).  Adobe’s documentation was little help in getting this going (even if you are coding in Visual Studio, there is very little guidance on how to get things set up).  I do have to note that my setup has one distinct disadvantage: you cannot debug the DLL when it is launched from AIR.  You will have to write your own C/C++ harness to test your code.  If you use the Visual Studio tooling, you CAN debug any DLL while it is running (this is why Microsoft replaces the shared libraries on your system, to allow that debugging).

I’ve created a four part video series documenting how to get going creating ANEs.  Part 1 covers setting up your environment, including installing CDT, the compiler, and getting Eclipse setup to do your programming.  Part 2 covers actually coding the C/C++ code for your Windows DLL.  Part 3 covers creating your ANE, and packing up all the stuff needed to make it work.  And Part 4 covers how to use your new ANE in an AIR project.

The BikePOV. Adobe AIR + Arduino + Blinking lights on a bike

So, for the past month I have been working on a side project called the BikePOV.  If you have been reading my tweets, I’m sure you’ve picked up on my cursing, explaining and working on making it work. 

This evening I finally got everything working just the right way — and it actually works!

So, first let me explain what is going on.  I took an Arduino prototyping board and designed a circuit around it.  Essentially I took 12 RGB (Red, Green, Blue) LEDs and soldered them onto a circuit board.  I then mounted the circuit board in between the spokes of a bike wheel.  The theory is that as the wheel turns, I can control the LEDs and make them flash in a pattern that represents letters, patterns or images.  This is called a POV, or Persistence of Vision.

This idea has been done before; there are pre-made kits that you can buy from a company called AdaFruit.  A company called MonkeyLectric also sells a POV kit for about $60 (which is MUCH nicer than my setup, but they only have pre-done patterns).

I’m Speaking at Adobe MAX!

This year I was lucky enough to be selected as one of the speakers at Adobe MAX 2011!  I will have a session about integrating various hardware products with Adobe Flash, Flex and AIR.  Most of my talk will revolve around using the Microsoft Kinect and Arduino-based (and other AVR) projects as inputs and outputs for the Flash/Flex/AIR stack.

If you have been following me lately on Twitter, you will have seen me talking about some projects that I’ve been working on, including a Kinect version of Space Invaders and a BikePOV.  Both of these projects will be shown during my talk (in addition to others!)  The Kinect is such a cool input device that I think it is held back only by what it takes to develop for it (the situation with drivers, required libraries, dependencies and lack of documentation makes it REAL hard for non-developers to do anything with it).   The Arduino allows hobbyists to use their basic electronics skills to build very complex electronic gadgets and interact with them using a computer.  These are all things that required EE degrees when I was a kid, so it’s super cool to see that technology has progressed to the point where you can build this stuff quickly and easily.

Make sure to sign up for the session!  It is on Tuesday from 1 – 2pm!
