WorldWind with the Compact Framework

http://www.brains-N-brawn.com/cfWorldWind 5/13/2005 casey chesnut


the following explains how this article came about. ... it all begins with me sitting at work minding my own business. when i overhear a coworker explain to my boss about how he should check out this program called WorldWind (WW). the key words i picked up were : NASA, C#, FREE, and SOURCE CODE. that caught my interest so i took a note to check it out later. that night i downloaded it and was absolutely blown away. it was one of the coolest applications i had played with in a long while. plus it had source! so i shut down the installer version and grabbed the code. built both the solutions, moved the data over, and it ran ... way too easy. looked around the code for a bit and saw that it was using Managed DirectX (MDX). i did not know anything about 3D programming, let alone MDX ... but i did know that the next version of the Compact Framework (CF) was getting Managed Direct3D Mobile (MD3DM). then it clicked ... make WorldWind (WW) run on a Pocket PC (PPC). "the world in the palm of your hand" ... if you will. NOTE this is just the 1st of many cheesy jokes. shot off an email to my MVP lead, and Microsoft got me hooked up with a Pocket PC that had the newer OS bits i would need to start developing. also got a hold of an early version of Visual Studio 2005 Beta 2. i seriously did not know anything about DirectX ... so i had to write the /cfMDX article ... to get warmed up. that helped me to understand the basics about Direct3D (D3D), and what would be involved when porting an MDX app to MD3DM.

of course there are ulterior motives. first, i've been looking for an excuse to learn 3D. but i dont play games, nor do i want to write games. instead, i would like to start using the advanced visualization techniques for application development. second, i want to do more location based development (LBS), so i saw this as a way to get more into GIS and such. third, i needed an excuse to write something using v2 of the Compact Framework (CF). and i didnt want it to be some lame little contacts app. fourth, out of pride for the .NET platform. there is a thread on the WorldWind forums about why C# and DirectX was chosen for WW. plus people are bitching about why its not written in OpenGL / C++. and there are also alot of posts asking for ports to other platforms. instead of whining, i decided to just port/rewrite (the basics of) it myself. fifth, i needed a challenge. nobody was asking for it to be ported to a mobile device ... but it seemed like that would up the difficulty to get it to run on a resource constrained device. sixth, WorldWind was getting alot of link juice, and i wanted to leech off of that. by leech, i mean the blood sucking kind. that was more than enough to get me to devote a chunk of my spare time to throw this together. seventh, to give back to the community. WW has very little documentation. and for the people that say code is self documenting, please refer to my upwards raised middle finger. there is a smack load of code, so any sort of docs would have helped. maybe people can use this article to help get jump started a little quicker. eighth, to work on something that i have no clue about. i love to start working on a project and have no clue about the technologies or domain. you just keep pushing along, and then at some point, something clicks into place. my favorite projects are when i really dont know if they can be done ... or not


WorldWind

WorldWind is an open source application written by NASA, and currently under development at SourceForge. its a virtual globe of the earth that displays satellite images and other data. the main portal for the project is at NASA, along with forums. and a great quote about their mission : To inspire the next generation of explorers ... as only NASA can. ... that has got to be the best mission statement EVER. WorldWindCentral also has alot of information including a wiki. plus DotNetRocks had a show where they talk to the developers.

just to be clear, i'm an independent consultant ... so do not represent NASA or Microsoft.

Port or Rewrite

that was the first question i asked myself. i would have preferred to just create a bunch of CF projects and fix the bugs as builds failed. this wasnt really feasible because of the amount of code, the richness of the UI on the desktop, etc... so a full port was out of the question. the next level is to do a limited port : rewrite the main application and just port over small pieces of the desktop code. this is the process i started out with. set up the application shell and just started bringing bits and pieces over. this worked until i quickly pegged the CPU of the device. the reason being that WW does very little caching of data. it does most of the calculations for what to render over and over. knew that wasn't going to work, so i started reworking the code to better fit a caching model. meaning the calculations would be done up front and cached off, then rendering would only have to draw the current view. so the actual code ends up being a mix of a targeted port and a rewrite to allow for caching.


even though the code is great, WW is nothing without the data ... so i looked at that next. the install comes with 555 megs of data! 330 megs of textures for the Earth called BlueMarbleTextures. 10 megs of boundary, and 215 megs of placename information. so that data gets you off the ground and still makes the app relatively cool without requiring an internet connection. beyond that, it gets data from a number of different servers for weather info and satellite images (including TerraServer). this ends up being mass quantities of data. i've heard them say terabytes of data for satellite imagery. and to bring down this data requires big pipes, meaning you really need a broadband connection to use it. the current Pocket PCs and US wireless networks arent great at handling transfers of large data in this manner, so i didnt plan on having the application call out to the satellite imaging servers from the beginning. instead, i decided to make this 1st release only work with the data that was provided in the install. this still required a decent amount of work, because the data was made to work on the desktop and not a device. to get around this i had to write a desktop app to preprocess the data and make it device friendly. but there is also a 3rd option between only using local data and calling out to the servers. this involves having a desktop application that could pull down the data and cache it off. WW does this, as well as some community developed apps. V2 of this app would take that approach to use richer data (such as satellite imagery) that has been pulled down and cached on the desktop and then copied over to a storage card of the device.

those were the major decisions. i was going to write it to run on a Pocket PC with an early version of the next WindowsMobile OS. it would be written using C# and the new MD3DM libraries for the Compact Framework. the code would be partially ported / rewritten to support a caching model to improve performance. and the 1st version would only use the local data that was provided during the install of WW. this would make the 1st version provide just the base functionality.

Getting WW to build

because there was no documentation, i needed to get WW running in debug mode first, to be able to step through the code and see exactly what was going on. the first thing i did was download and install the binary release WW 1.3. then i downloaded and unzipped the source code. next, i moved the \Data\Earth\ directory from the installed directory to the code \bin\Debug\ directory. also, i created a single solution containing all the projects from the WorldWind and WorldWindow solutions. to get that to build i had to change all assembly references to project references. then the single solution would build and i could run WW in debug mode.

On the 1st day, "let there be light"

started out by creating a new application and getting the render loop set up. the dev environment was Visual Studio 2005 and the version was a pre-release of Beta 2. also had the SDK installed for an early version of the Windows Mobile 5.0 OS, which provided the new project types. for the base application i used one of the MD3DM samples that comes with VS 2005. this created the device, set up the render loop, and just blacked out the screen. then i looked at the code in \PluginSDK\Camera.cs to get the initial placement of the camera. NOTE i think that all the WW code i ended up looking at was from the \PluginSDK\ project. the world ends up being drawn at the origin of World space, and the camera just changes position around it. this is different than most Direct3D samples, which have the camera in a stationary place and just transform the object in World space. WW is different than everything else i had looked at too, because it uses a RH rendering model, while DirectX is LH by default. the main change i made here was to not read the settings from an XML file. instead i added them as constants to the program for speed. also, i decided to do all the rendering in the main loop, while WW does rendering on a separate thread. finally, i turned on ambient lighting. this just rendered a black screen as empty space.
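the skeleton ends up looking roughly like this. to be clear, this is a minimal sketch and not the actual sample code; the class name, the projection settings, and running the render loop off OnPaint are my own choices :

```csharp
// minimal MD3DM app skeleton : create the device, set up a RH
// projection (WW style), and clear the screen in a render loop.
using System;
using System.Drawing;
using System.Windows.Forms;
using Microsoft.WindowsMobile.DirectX;
using Microsoft.WindowsMobile.DirectX.Direct3D;

public class WorldForm : Form
{
    private Device device;

    public WorldForm()
    {
        this.Text = "cfWorldWind";
        PresentParameters pp = new PresentParameters();
        pp.Windowed = true;
        pp.SwapEffect = SwapEffect.Discard;
        // no GPU on current devices, so no hardware vertex processing
        device = new Device(0, DeviceType.Default, this,
            CreateFlags.None, pp);
        // WW renders right-handed; D3D is left-handed by default
        device.Transform.Projection = Matrix.PerspectiveFovRH(
            (float)Math.PI / 4.0f,
            (float)this.ClientSize.Width / this.ClientSize.Height,
            1.0f, 100.0f);
        device.RenderState.Lighting = true;
        device.RenderState.Ambient = Color.White;
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        device.Clear(ClearFlags.Target | ClearFlags.ZBuffer,
            Color.Black, 1.0f, 0);
        device.BeginScene();
        // nothing drawn yet : day 1 just clears to empty space
        device.EndScene();
        device.Present();
        this.Invalidate(); // simple render loop on the main thread
    }
}
```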


On the 2nd day, "let the mesh have texture"

now that we had an empty universe, it was time to create earth. WW starts out by creating a spherical mesh and then applying a texture from the BlueMarbleTextures to it. the initial texture is of the entire earth and applied to the entire sphere. specifically, the texture file is land_shallow_topo_2048.dds, which is 1024x2048 and 1 meg large. i went about creating the Mesh 1st. in MD3DM you can create a sphere as a primitive : Mesh.CreateSphere(). this isnt really useful because you cannot apply a texture to it, since it lacks texture coordinates. so i grabbed the code from \PluginSDK\ImageLayer.createMesh() which creates a sphere that will render in RH space. i changed this code by using fewer vertices and also saving the Mesh off to a file. NOTE MD3DM does not support .X mesh files, but provides helper classes for a binary .md3dm format. then the program can just load the mesh from file instead of having to do any calculations at all. with the mesh, i could render the earth as a white sphere in the view. the next step was to apply the texture. i started out just trying to load the same texture that WW does, but got a -2147024809 Exception : Value does not fall within the expected range. my 1st assumption was the texture was just too large, so i used the D3D Texture tool to resize it to 512x1024 ... but that failed too. next i assumed it was the file format, so i converted it to a BMP, and that was able to load. it also loaded in JPG/PNG/GIF formats, but not TIF. i'm not sure why it did not load as DDS, because i can get some other DDS textures to load? granted, i have other DDS files that wont load either ... but i didnt want to waste too much time investigating this. now the app starts up and initializes the earth mesh and texture. then the render method just sets the texture and draws the mesh. which as far as i'm concerned was the coolest 'hello world' application ever.
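the idea behind createMesh() can be sketched roughly like this. NOTE this is my own simplified take and not the actual \PluginSDK\ code; the vertex layout and texture mapping here are assumptions :

```csharp
// sketch of building textured sphere vertices for the earth : walk
// latitude/longitude and emit position, normal, and uv per vertex.
// an index buffer (not shown) would stitch these into triangles.
using System;
using Microsoft.WindowsMobile.DirectX;
using Microsoft.WindowsMobile.DirectX.Direct3D;

public static class SphereBuilder
{
    public static CustomVertex.PositionNormalTextured[] BuildVertices(
        float radius, int slices, int stacks)
    {
        CustomVertex.PositionNormalTextured[] verts =
            new CustomVertex.PositionNormalTextured[
                (slices + 1) * (stacks + 1)];
        int i = 0;
        for (int stack = 0; stack <= stacks; stack++)
        {
            // latitude from -90 to +90 degrees
            double lat = Math.PI * stack / stacks - Math.PI / 2.0;
            for (int slice = 0; slice <= slices; slice++)
            {
                // longitude wraps all the way around
                double lon = 2.0 * Math.PI * slice / slices;
                float x = (float)(radius * Math.Cos(lat) * Math.Cos(lon));
                float y = (float)(radius * Math.Cos(lat) * Math.Sin(lon));
                float z = (float)(radius * Math.Sin(lat));
                verts[i].Position = new Vector3(x, y, z);
                // normal on a sphere is just the normalized position
                verts[i].Normal = new Vector3(
                    x / radius, y / radius, z / radius);
                // map the whole earth texture once around the sphere
                verts[i].Tu = (float)slice / slices;
                verts[i].Tv = 1.0f - (float)stack / stacks;
                i++;
            }
        }
        return verts;
    }
}
```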

On the 3rd day, "let the user have controls"

now that i had a simple earth rendering, i wanted to nail down the user controls. this was challenging, because its sometimes hard to even control WW on the desktop. the 1st decision was how much i wanted to use the stylus. the outcome was to downplay the stylus in favor of the directional keypad, so that i could ultimately port this over to a Smartphone as well. this also dictated that it would only use 2 hardware keys. made the left hardware key handle changing what mode the directional keypad is in, and the right key handle the options menu. the options menu maps almost exactly to the menu provided by WW. the directional keypad is the tricky part. it has 3 modes : Spin World, Zoom/Rotate World, and Tilt Camera. it starts out in Spin World mode, so the directional keypad is used to spin the world up/down/left/right. if you hit the left hardware key, then it switches to Zoom/Rotate mode. Zoom uses the up/down directions to zoom in/out respectively. Rotate is done with the left/right directions. the 3rd mode is Tilt Camera. this angles the camera up/down/left/right accordingly. wanted to get this feature in, but it wont really be cool until there is terrain (which doesnt happen in this version). those controls are handled just by changing the 3 transforms : world, view, and projection. the world transform never changes and is always Matrix.Identity. the view is where most of the work happens. spinning the world is handled by changing the position of the camera and always having it point to the origin. zoom is also handled by the camera's position. camera tilt and rotation are handled by applying Matrix.RotationYawPitchRoll. projection just sets up the frustum for what should be rendered. to help with these calculations i copied both the MathEngine and Angle classes directly from WW. these ended up being the only classes that i copied directly.
the only changes that had to be made were commenting out the use of the DirectX Plane class, because MD3DM does not provide an implementation. i really wish that it would have provided a Plane class, because then i could have ported over WW's class for doing frustum culling (ViewFrustum.cs?). that left 2 other hardware keys : one to switch to Landscape mode, and the other to open the Pocket Calendar. first, i had to disable the switch to Landscape mode because MD3DM does not support it. so now it just captures that change and suppresses it. i disabled the Calendar hardware button because /cfWorldWind (/cfWW) is very memory intensive. the menus still work as expected; the only problem i get is that they do not render initially. i have to tap them with the stylus or press one of their hardware buttons to get the proper menu name to display. dont know if this is a bug in my program, the version of the OS, or the version of the runtime? the last control was to add the stylus. all i did was grab the method from WW, and this allows you to tap a position on the earth and get the view to center on that spot. finally, i added a class to render the positional info (lat, lon, alt) as text, as well as the crosshairs. the crosshairs were trickier than expected because MD3DM does not provide a Line class. my 1st attempt was to try and do it the GDI+ way : just grab the Graphics of the form and call DrawLine() on it. this sort of worked in that the lines were drawn, but the crosshairs flickered. so the moral of the story is, dont mix MD3DM and GDI+. the MD3DM way to do this involved initializing VertexBuffers, and then just rendering those. this is what WW does, but they do it a little lazier by just creating the Vertices and calling DrawUserPrimitives. i could not do this because DrawUserPrimitives is not supported by MD3DM. the other problem i had was getting them to render the correct color.
even though i was using PositionColored vertices with a specific color, i could not get the correct color to render without changing the color of the Material on the device first. my guess is this is due to a bug in my code ... but i dont know DirectX well enough to see what i'm doing wrong? so now the user can change their view of earth.
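the crosshair VertexBuffer approach can be sketched like below. this is a sketch of the technique, not my actual class; the names, the use of TransformedColored vertices, and the buffer settings are my own assumptions :

```csharp
// sketch of drawing the crosshair with a VertexBuffer, since MD3DM
// has no Line class and does not support DrawUserPrimitives.
using System.Drawing;
using Microsoft.WindowsMobile.DirectX;
using Microsoft.WindowsMobile.DirectX.Direct3D;

public class CrossHairs
{
    private VertexBuffer crossHairBuffer;

    public void Initialize(Device device, int cx, int cy, int size)
    {
        // 4 vertices = 2 lines, in already-transformed screen coords
        crossHairBuffer = new VertexBuffer(
            typeof(CustomVertex.TransformedColored), 4, device,
            Usage.WriteOnly, CustomVertex.TransformedColored.Format,
            Pool.Default);
        CustomVertex.TransformedColored[] v =
            new CustomVertex.TransformedColored[4];
        int argb = Color.Red.ToArgb();
        v[0] = new CustomVertex.TransformedColored(cx - size, cy, 0f, 1f, argb);
        v[1] = new CustomVertex.TransformedColored(cx + size, cy, 0f, 1f, argb);
        v[2] = new CustomVertex.TransformedColored(cx, cy - size, 0f, 1f, argb);
        v[3] = new CustomVertex.TransformedColored(cx, cy + size, 0f, 1f, argb);
        crossHairBuffer.SetData(v, 0, LockFlags.None);
    }

    public void Render(Device device)
    {
        device.VertexFormat = CustomVertex.TransformedColored.Format;
        device.SetStreamSource(0, crossHairBuffer, 0);
        // 2 primitives = the horizontal and vertical lines
        device.DrawPrimitives(PrimitiveType.LineList, 0, 2);
    }
}
```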

On the 4th day, "let there be tiles"

now that you could zoom in, you would quickly find out that the texture gets pixelated. WW handles this by having 4 levels of the BlueMarbleTexture, and applying those to tiles of the earth. so instead of having 1 single sphere mesh of the earth, the mesh is broken up into multiple tiles. then each tile has a texture applied to it. as you zoom in, the tile sizes become more granular and the textures are of a higher resolution. the first step was to rip out the code for creating tile meshes. i got this working on a desktop application first, and had it save those mesh tiles to .md3dm files. this was done to save calculations on the device. then i created a set of tile images that i could test with. these images were just all white and had text on them that specified what row/column they represented. let me back up here. the BlueMarbleTexture tiles are broken up in 4 levels. each level has directories that represent rows of tiles, and each row directory has the actual tile DDS texture files for the columns. so level 0 has 5 rows and 10 columns. level 1 increases in resolution with 10 rows and 20 columns; and so on through 4 levels. now i had test images that would let me see if the proper row/column of tiles was being rendered.
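the tile scheme described above works out to a simple bit of math. this sketch is my own restatement of it (method names are made up) : level 0 is 5 rows by 10 columns of 36-degree tiles, and each level halves the tile size :

```csharp
// sketch of the BlueMarbleTextures tile layout : which row/col of
// tiles contains a given lat/lon at a given level.
using System;

public static class TileMath
{
    // degrees of lat/lon covered by one tile at a level
    public static double TileSizeDegrees(int level)
    {
        // level 0 = 36 degrees (5x10 tiles), level 1 = 18, and so on
        return 36.0 / Math.Pow(2, level);
    }

    // row 0 starts at -90 latitude
    public static int Row(double latitude, int level)
    {
        return (int)Math.Floor((latitude + 90.0) / TileSizeDegrees(level));
    }

    // col 0 starts at -180 longitude
    public static int Col(double longitude, int level)
    {
        return (int)Math.Floor((longitude + 180.0) / TileSizeDegrees(level));
    }
}
```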


and the rendering was a bit tricky too. WW uses frustum culling to determine which tiles to render. i could not use this class because of the missing Plane class, nor am i advanced enough to implement my own Plane, so i used a simpler technique. all /cfWW does is determine which tile the view is centered on. then it renders the 8 neighbor tiles that surround it. this technique ends up being less accurate, but really fast. generally, about 90% of the screen will be rendered with higher resolution tiles, while the remaining 10% will render the basic texture from day 2 above. the only other calculation done is to figure out the altitude to determine which level the tiles should be rendered from. so as you zoom in, if you pass the threshold from one level to another, then there is a slight delay as the new set of 9 meshes and textures is loaded. if you just spin the world left and right, then it only has to load the 3 new neighbor textures and dump the 3 nodes that went out of scope. once i had that working, i created a desktop app to convert the BlueMarbleTextures on the desktop. it resized them from 512x512 to 256x256 and saved them as BMP files instead of DDS. i tried to do this initially with a mix of DirectX and GDI+ bitmap handling, but that introduced rendering seams between the tiles. if i used DirectX alone to resize and save, then the seams did not show. once again, dont mix DirectX and GDI+. the DirectX code below is what was used to resize and convert the tiles (posted by cadull in the directx.graphics newsgroup) :

Texture t = TextureLoader.FromFile(device, realTexturePath);
// dimensions are scaled by 1/2^n (e.g. n=2 to divide by 4)
int n = 2; //divide by 4
Surface sn = t.GetSurfaceLevel(n);
SurfaceLoader.Save(bmpTexturePath, ImageFileFormat.Bmp, sn);
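the 'center tile plus 8 neighbors' selection described above can be sketched as below. this is an illustrative sketch, not my actual code; wrapping columns around the date line and clamping rows at the poles are my assumptions :

```csharp
// sketch of the simpler alternative to frustum culling : just pick
// the tile under the view plus its 8 neighbors.
using System;
using System.Collections.Generic;

public static class NeighborTiles
{
    // returns up to 9 {row, col} pairs around the center tile
    public static List<int[]> Select(int centerRow, int centerCol, int level)
    {
        int rows = 5 * (int)Math.Pow(2, level);   // 5 rows at level 0
        int cols = 10 * (int)Math.Pow(2, level);  // 10 cols at level 0
        List<int[]> tiles = new List<int[]>();
        for (int dr = -1; dr <= 1; dr++)
        {
            int row = centerRow + dr;
            if (row < 0 || row >= rows)
                continue; // no wrapping over the poles
            for (int dc = -1; dc <= 1; dc++)
            {
                // columns wrap around the date line
                int col = ((centerCol + dc) % cols + cols) % cols;
                tiles.Add(new int[] { row, col });
            }
        }
        return tiles;
    }
}
```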

now i had something like 250 megs of mesh and texture files for the device. went to Sams and bought a 1 gig SD card for only $70. wow, when did they get that cheap? now when the app starts up, it initially loads with the single mesh and single texture as before for a quicker startup time. if you zoom in below 10,000 kilometers then the 1st level of 9 tiles loads. as you keep zooming in it loads more and more detailed tiles for each level. the base texture always loads to render the edges of the world, and the tiles overlay it with more detailed resolutions. if i were to continue down this path, the next step would be to start changing the height of particular points on the mesh tiles to represent terrain (e.g. mountains and valleys) and then overlay satellite images. but i did not continue because i was starting to run into CPU vs RAM tradeoffs on the device. instead i just moved on to the next type of local data.

On the 5th day, "let there be boundaries"

so that takes care of working with the BlueMarbleTextures. the next set of data included Boundaries. WW comes with 5 sets of boundary data for Countries, Canada, Norway, Sweden, and US States. these boundaries are just rendered as line lists above the surface of the earth. there are 2 binary files for each type of boundary : index and data. the index file provides info on how to parse the data file. to read these files you just use the BinaryReader of .NET. the first attempt i made was to read in the boundary data, load that into VertexBuffers, and then render the lines. but this took way too long, and it barely rendered at all. when zoomed out the boundaries just disappeared. as you zoomed in, then they would appear. i think this was because the zoomed out view had alot of lines that had their starting and ending points at the same viewport pixel. WW gets around this by doing a projection of each line into the viewport and seeing if the start and end points are some distance apart. if not, it just moves on to the next point. this allows WW to render less detailed boundaries far away, and more detailed boundaries as you zoom in. i could not do this on the device because that would require too many calculations during render. instead, i was going to only load one set of vertices during init, and just render the same set each time. to come up with that set of vertices i wrote another desktop app to process the files and shrink them down (e.g. 2.5 megs of boundary data down to about 25K). then i used that boundary file on the device instead. this worked great for all the boundaries except for the US. you could see the boundaries zoomed out, as well as when zoomed in. the US boundaries get messed up for some reason, so that some of the state boundaries overlap (e.g. the TX panhandle doesnt look like a panhandle at all).
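the desktop shrinking app boils down to reading points with BinaryReader and dropping the ones that are too close together. this sketch illustrates the idea only; the binary layout here (pairs of floats) is a simplifying assumption, not the exact WW boundary format :

```csharp
// sketch of decimating boundary data on the desktop : keep a point
// only if it moved far enough from the last kept point.
using System;
using System.Collections.Generic;
using System.IO;

public static class BoundaryShrinker
{
    public static List<double[]> Shrink(string dataPath, int pointCount,
        double minDegrees)
    {
        List<double[]> kept = new List<double[]>();
        using (BinaryReader reader = new BinaryReader(
            File.OpenRead(dataPath)))
        {
            double lastLat = double.MinValue, lastLon = double.MinValue;
            for (int i = 0; i < pointCount; i++)
            {
                // assumed layout : lat/lon as 32-bit floats per point
                double lat = reader.ReadSingle();
                double lon = reader.ReadSingle();
                if (Math.Abs(lat - lastLat) >= minDegrees ||
                    Math.Abs(lon - lastLon) >= minDegrees)
                {
                    kept.Add(new double[] { lat, lon });
                    lastLat = lat;
                    lastLon = lon;
                }
            }
        }
        return kept;
    }
}
```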


since i was already working with lines, went ahead and implemented the lat/lon lines, plus the equator and tropics. this involved just using LineLists the same as above. WW had the code to do those calculations, and i just do those during init for different levels of altitude. e.g. one level of lat/lon lines has them all spaced 10 degrees apart, the next is 5, and the most detailed is 2 (WW goes one more level to 1 degree separation). WW also renders labels on the lat/lon lines and positions each label based on the view. this requires the calculation to happen during rendering, so i chose not to render those labels for the sake of speed.

On the 6th day, "let the places have names"

the previous day accounted for boundary data, so i moved on to working with the placename data set. from having problems reading in the 10 megs of boundary data, i knew that i could not read in the 200 megs of placename data. just looking at the file system, i could see that some of the placename sets had a similar tile row/col setup that was used for textures. i didnt want to mess with that anymore so i just deleted those files. getting rid of those took the set down to about 3 megs, and those 3 megs would still provide alot of placenames. the files ended up being really similar to the boundary files above. they were binary and parsed by using the BinaryReader. now init just reads those files, and stores them in memory. as you zoom in, the placenames are rendered accordingly. all i do is a little lat/lon check against the viewrange to figure out what placenames should be rendered for the current view. since the placenames were held in memory, i also implemented the 'Find Place' functionality. so you can search for a place by name, and then if you select it, the view will be rotated to center on that location. the only other change i made regards settings. the desktop stores settings in an XML file. to read that data in i used the new XmlSerializer class for CFv2. first i created the corresponding serialization classes by using XSD.exe. then, i used the XmlSerializer class with an instance of that type to deserialize the XML into objects. this worked great but was rather expensive, because the XmlSerializer has to do alot of reflection (it doesnt generate an assembly at runtime like the desktop). to get past this i just used codegen instead. wrote a quick and dirty XSLT file to transform the XML into C# code that would create an array that looked just like the collection of objects that the XmlSerializer would return. this is pretty ugly, but its fast and cheap; so i used this technique a couple more times later on. and speaking of collections ...
was telling a coworker about how much i liked VS 2005 and he asked if i had used generics yet. i actually had not because i was somewhat scared of them based on my early C++ experience with templates as a kid. but he guilted me into trying them ... and they kick ass. it only took like 5 minutes to understand them in a little test project, and then about 5 more minutes to convert this existing codebase over. now i just have to retrain myself to not use the ArrayList and Hashtable unless absolutely necessary.
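the conversion really is that quick. here is a trivial before/after sketch (the Placename type is made up for illustration) :

```csharp
// trivial example of converting CFv1-style collections to generics.
using System.Collections;
using System.Collections.Generic;

public class Placename
{
    public string Name;
    public double Latitude;
    public double Longitude;
}

public class Before
{
    // ArrayList holds object, so every read needs a cast
    private ArrayList placenames = new ArrayList();
    public Placename Get(int i) { return (Placename)placenames[i]; }
}

public class After
{
    // List<T> is strongly typed : no casting, and the compiler
    // catches mistakes that used to be runtime InvalidCastExceptions
    private List<Placename> placenames = new List<Placename>();
    public Placename Get(int i) { return placenames[i]; }
}
```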

we'll get to the seventh day at the end of the article, but there are more features to add. NOTE the six days reference was just an attempt at humor. i'm not religious. nor do the days represent actual days. this took me about a month to do. i was timeboxed from when i got the device until the MEDC conference happened. i wanted to get this finished before MEDC so that i could show it around there and work on something else afterward. now that the core data sets were being used, i went ahead and moved on to work with some of the add-on data sets, including landmarks and flags. i did landmarks 1st because it was smaller. all it does is go through each landmark and render an image at the corresponding spot. i brought the wikipedia URLs over as well, but am not checking the stylus clicks to spawn the URL or provide descriptive text. left this out because i thought it might take too much time each time you clicked the earth just to get it to recenter. flags of the world ended up being exactly the same as landmarks. the only problem with flags is that it was really memory intensive. mitigated this a little by making the images smaller. regardless, i had to resize the images because the width and height of the images were not powers of 2, and varied for each flag image. so all i did was resize them to a size that MD3DM could load and render appropriately.


MapPoint WebService

landmarks and flags were add-ons to WW, albeit boring ones. i needed to spice it up just a bit. something that i had been thinking about was the data sources that WW uses. it gets environmental data from MODIS and satellite images from various sources including TerraServer. my next thought was that it should also get data from the MapPoint Web Service (WS). the vision i had involved 3 different scenarios : Points, Lines, and Maps. MapPoint has the Address service, which you can use to get the lat/lon for an address. with the lat/lon, you can then render that spot on the Earth either as a Placename or an Add-on like Landmarks and Flags. also, the Address service can return 'areas of interest' around a certain lat/lon. so when you are zoomed in really close on the map, then it could call out to the MapPoint WS and plot nearby gas stations / coffee shops / etc... on the map. they also provide a Route web service for directions. the directions come back with lat/lon coordinates for when you turn a corner, so this would allow you to plot the travel course just like the boundary lines are rendered. finally, you can get a Map tile back as an image, and then render that on one of the tiles, instead of using the satellite imagery. as a proof of concept, i just decided to plot out Placenames for Hooters restaurants.


this adds the locations of Hooters restaurants as Placenames in WW. it works in both the desktop version and on the device. first, i went to the Hooters website and copied all the addresses for their US restaurants. then i did some manual labor to work that into an XML format. next, i wrote a desktop application to submit those 360 addresses to the MapPoint WS and get back the lat/lon for each restaurant. then i had the desktop app write that data to the binary Placename format. for the index file, i just used one of the preexisting ones from US city placenames. finally, i extended the Placenames.xml config file to include the new Hooters placename data. to get it to work on the device i just copied the data file over and added the setting to include that DataSet. now you can easily see where the Hooters restaurants are throughout the US. now if somebody would just provide me an XML file with the addresses to all the strip clubs throughout the world ...
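writing the binary placename file from the geocoded addresses looks roughly like this. the record layout below is a simplifying assumption for illustration; the real WW placename format has more fields than name/lat/lon :

```csharp
// sketch of writing geocoded addresses out to a simple binary
// placename file with BinaryWriter.
using System.IO;

public static class PlacenameWriter
{
    public static void Write(string path, string[] names,
        float[] lats, float[] lons)
    {
        using (BinaryWriter writer = new BinaryWriter(
            File.Create(path)))
        {
            // record count up front so the reader knows when to stop
            writer.Write(names.Length);
            for (int i = 0; i < names.Length; i++)
            {
                writer.Write(names[i]); // length-prefixed string
                writer.Write(lats[i]);
                writer.Write(lons[i]);
            }
        }
    }
}
```

the matching read side is just BinaryReader.ReadInt32 / ReadString / ReadSingle in the same order.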



GPS

this was just begging to be done. i believe i wrote the very first GPS reader for the CF about 2.5 years ago for the /noSink article. the problem is i haven't done anything with GPS since then, and there are a ton of new CF GPS class libraries that have been written since then. so i started trying them out, and none of them would work. ends up that my old Pretec CompactFlash GPS just wasn't cutting it. but i already had a Pharos 360 GPS from the MS Streets and Trips 2005 package, so i just headed over to CompUSA to buy a CompactFlash adapter for $50. plugged it in and some of the class libs actually started working. ended up going with one of the older ones from JW Hedgehog. all i did was quickly tie that into the app, and now the app just keeps the lat/lon updated from the GPS, so the display is always centered on your current location. it works, but would be alot cooler if i had the satellite images working too.


Performance

when you first started reading this article, you probably thought i was crazy. are you kidding me? you want to have WW running on your Pocket PC ... and with Managed Code! well, i think its doing pretty darn good on a 2 year old device that was never intended to run 3D graphics, and is using software emulation. but i cant say that it was all that easy; i definitely had to be conscious of performance. the first battle was with the CPU. WW on the desktop doesnt have to cache much data because todays machines have processing power to spare, and they have video cards to offload the rendering. the PPC device has limited power, and it does not have a graphics processing unit (GPU); so /cfWW could not afford the luxury of crunching the same numbers over and over again. to get past this i had to do most of the calculations up front during the initialization of the program, so that when the program was running it could use all the CPU just to handle control input and render accordingly. this takes away slightly from the user experience, but keeps the framerate up. in general, this runs at about 10 fps. thats low for todays games, but it looks pretty good on a PPC. i do get lag whenever i have to load tiles when switching to a higher resolution. when devices get GPUs, then they will be able to use the CPU for doing calculations and the GPU for rendering. the second problem i ran into was limited RAM, because of all the caching i was doing. there are times when /cfWW is using all of the 32 megs of available RAM on the 64 meg device. to get around this i just cant turn all the features on at once, and i have to unload the cached data when some features are turned off. this also takes away a little from the user experience when they are waiting for a data set to load (and unload), but it keeps more RAM available for rendering and the framerate stays higher. this also meant that i had to be real careful about disposing of resources, particularly Mesh, Texture, and VertexBuffer objects.
wrote a quick little app to reflect against the MD3DM assembly to list all the objects that have a Dispose method (listed below). NOTE that Font has a Dispose method, but does not implement IDisposable? finally, i totally ignored a performance feature of MD3DM. it has the capability to work with fixed point numbers instead of floating point. this should provide a performance increase on small devices without FPU processors. early on i chose to ignore this feature and just use floating point for a couple of reasons. first, i wanted to keep the code as close as possible to WW. second, from testing on the preview device, it did not show any performance improvement between the samples that were implemented in both floating point and fixed point. i would definitely rethink this for a production quality application or for a device with a GPU.
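that quick little reflection app amounts to something like this. this is a sketch, not the exact app; the assembly file name/path here is an assumption :

```csharp
// sketch of reflecting over the MD3DM assembly : list every public
// type with a parameterless Dispose method, and flag the ones that
// do not implement IDisposable (like Font).
using System;
using System.Reflection;

public static class DisposeLister
{
    public static void Main()
    {
        Assembly md3dm = Assembly.LoadFrom(
            "Microsoft.WindowsMobile.DirectX.dll");
        foreach (Type type in md3dm.GetTypes())
        {
            MethodInfo dispose = type.GetMethod(
                "Dispose", Type.EmptyTypes);
            if (dispose != null)
            {
                bool disposable =
                    typeof(IDisposable).IsAssignableFrom(type);
                Console.WriteLine("{0} {1}", type.FullName,
                    disposable ? "" : "(no IDisposable!)");
            }
        }
    }
}
```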


Conclusion

this explained how i ported the basic functionality of NASA's WorldWind to run on a PocketPC using MD3DM. it uses all of the installed data, including BlueMarbleTextures, Boundaries, and Placenames. it also runs the Add-ons for Landmarks and Flags of the World. next, i created my own add-on for Hooters restaurants that runs on both the desktop and the device. finally, it integrated GPS to make it a truly mobile application.

personally, i'm very happy with the outcome because it works better than i initially thought it would. plus i learned quite a bit about WW, MD3DM, and CFv2. since WW is the largest public DirectX codebase that i know of, i think it speaks alot to the power of the MD3DM API that i was able to recreate so much of its functionality in a short amount of time. the only thing i really missed was a Plane class. also, i think it speaks to the power of the PocketPC platform in general, because i definitely would not have even attempted this on any other portable device. especially considering i was starting from ground 0 with DirectX, and only worked on this for about 100 hours in my spare time over the last month.


Source

the source code is listed below. first, i was developing with an early version of the Windows Mobile 5.0 OS, installed on a loaner Pocket PC provided by MS. the other option is to use the device emulator, but its way too slow when it comes to running MD3DM applications. you really need a device to do MD3DM development. i've successfully run it on the HTC Universal / i-mate JasJar. i'm assuming that it will also run on the K-Jam? i do not know if it will work on the Dell x50v or x51v devices, which have graphics accelerators?

also, you need both the source code and the converted data below. the device will also need Compact Framework v2 with the MD3DM bits deployed, which can be done through VS.NET 2005.

finally, here is the Placename data for getting the Hooters restaurants on WorldWind. first, you must create a \Hooters\ directory under the \Data\Earth\Placenames\ directory. then you need to copy the files into that directory. finally, you need to add the following section to the Placenames.xml config file.

<TiledPlacenameSet ShowAtStartup="true">



Future

of course the desktop version of WW implements alot more, including : scripting, terrain, high res satellite imagery, MODIS data, and caching downloaded data. /cfWW is set up to implement all this additional functionality, but i will not think about attempting it until a newer device is available with a faster CPU, more RAM, and preferably a GPU. the second version would probably just work off of data that had been cached on the desktop and copied to a storage card for the device. a third version would then actually pull the data down directly over the internet.

1/10/06 - uploaded the Data files to RapidShare.de. refer to 'Source' section above


on the seventh day, i rested. on the eighth day, i started in on my next project ... later