## Friday, October 23, 2009

### Shout Outs to Some Small Businesses Run by Great Guys

We are all in this together, so I wanted to take a second to drop some names for everyone. If you need composites work done or molds made at reasonable rates, call Robert at Mohrbacher Composites; he did a great job on some sick projects. If you have more conventional fabrication requirements, see the guys at Impact Fab and tell Ross that the Buzz Labs boys sent you. Ross works wonders with his water jet.

Thanks guys for helping us with my bad ideas!

### Phidgets UltraSonic RangeFinder

The rangefinder is not actually a Phidgets product; they are reselling the LV-MaxSonar-EZ1 from MaxBotix. In the bag you get the sensor (transceiver) and a small wire bundle. The wires need to be soldered into the proper pins. For reference, from the pin closest to the mounting hole:

black
red
open
green
white
open
open

Wire meanings:

black - ground
red - +5V
green - signal to trip the ultrasonic ping (+5V I think, but I do not use it)
white - 5V analog signal

This will cable your Phidgets three-wire signal cable correctly. Remember, this is not a normal sensor; it is one of their analog sensors, so you will need one of their A/D boards ("interface kits" in their terminology). The event that you will want to hook is the SensorChange event: every time the system reads a new voltage on the port, it fires SensorChange. See my post on the 888 interface kit to see how to set up the interface kit.

Here is my way of switching on the port index so you only need one method for any port change.

protected void SensorChange(object Sender, SensorChangeEventArgs Args)
{
try
{
InstrumentType _type = InstrumentType.notset;
int _index = 0;

switch (Args.Index)
{
case 7:
_type = InstrumentType.ultrasonicRangeFinder;
_index = 4;
break;
case 6:
_type = InstrumentType.ultrasonicRangeFinder;
_index = 3;
break;
case 5:
_type = InstrumentType.ultrasonicRangeFinder;
_index = 2;
break;
case 4:
_type = InstrumentType.ultrasonicRangeFinder;
_index = 1;
break;
case 3:
_type = InstrumentType.ultrasonicRangeFinder;
_index = 0;
break;
case 2:
_type = InstrumentType.pressureTotal;
totalPressure = convertVoltageToUnit(_type, Args.Value, _index);
speed = getPressureSpeed(totalPressure, staticPressure, staticTemperature);
break;
case 1:
_type = InstrumentType.temperature;
staticTemperature = convertVoltageToUnit(_type, Args.Value, _index);
speed = getPressureSpeed(totalPressure, staticPressure, staticTemperature);
break;
case 0:
_type = InstrumentType.pressureStatic;
staticPressure = convertVoltageToUnit(_type, Args.Value, _index);
speed = getPressureSpeed(totalPressure, staticPressure, staticTemperature);
pressureAltitude = getGeoPotentialAltitude(staticPressure / units.kPaToPsf);
break;
}
if (_type == InstrumentType.ultrasonicRangeFinder)
{
rangeFinders[_index] = convertVoltageToUnit(InstrumentType.ultrasonicRangeFinder, Args.Value, _index);
}

}
catch (Exception _exc)
{

throw new Exception(className + " protected void SensorChange( Sender, Args) :: " + _exc.Message + "\n");

}
}

There are some other details in this code, since I use a bunch of helper methods to calibrate the data coming out. Remember, the values (Args.Value) come out as doubles, so you can write a simple linear interpolation that will get your data back into usable units from voltage. For simplicity's sake, I keep the interpolation points in an XML config file that is read in at start time for each of the instruments. That way the values can easily be changed for recalibration.
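As a sketch of that idea (Python here rather than the C# of the post, and with invented calibration points standing in for the XML config values), a voltage-to-unit linear interpolation could look like:

```python
# Sketch: voltage-to-unit calibration via linear interpolation between
# config points. The points below are invented for illustration; in the
# original system they come from an XML config file read at startup.
from bisect import bisect_right

def make_calibrator(points):
    """points: (voltage, engineering_unit) pairs, sorted by voltage."""
    volts = [p[0] for p in points]
    units = [p[1] for p in points]

    def calibrate(v):
        # clamp readings outside the calibrated range
        if v <= volts[0]:
            return units[0]
        if v >= volts[-1]:
            return units[-1]
        i = bisect_right(volts, v) - 1
        frac = (v - volts[i]) / (volts[i + 1] - volts[i])
        return units[i] + frac * (units[i + 1] - units[i])

    return calibrate

# e.g. a rangefinder that reads 0 V at 6 inches and 5 V at 254 inches
range_cal = make_calibrator([(0.0, 6.0), (5.0, 254.0)])
```

Recalibration then means editing the config points, not the code.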

The code I have presented is simple and quick; it should work for almost any purpose and uses solid error-trapping techniques.

### Do I want to see both sides of your pretty face?

This week I am still waiting for motors. It turns out my motor supplier has been on vacation this week; how dare he? Sorry, Dave, I just needed to get the jab in. So I had a new idea for how to progress, thinking about something someone said to me. They were explaining how someone did not like to fly their great-looking soarer because of how much it cost to replace.

Then I sat there looking at the School Girl UAS on the table and thought, ZOMG! It costs two to three times as much as that guy's soarer. What the hell am I thinking? I have to make sure that this thing is showable to other people. It does have to fly, to demonstrate that engineering school was not a waste of time, but I just want to make sure I do not throw two months of work and a thousand dollars into the ground due to bad luck and clean living. All of my best ideas are inspiration this time; I swear there was no barley soda involved. Ok, maybe some, but it was good stuff. The decision was made to build a new version of the truck so that I could work more on my version of the IMU and control system, as well as make a safer platform for developing technologies for the UAS.

Long story short, a new beastie was conceived. Like the sign said, always well conceived, unlike people. This new ground guy lets me do lots of experiments with much less of the z-axis problem. The z-axis problem is acceleration due to gravity: something that drives along the ground can still have great mishaps, but it is easier to teach a car to drift a corner than it is to teach a plane to avoid turbulence.

So one of the first things about this idea is how to navigate the buggy. Actually, this is less of an issue than it may seem and allows us to do some much more interesting things. Monitoring the instruments and planning a heading to a goal is mostly done with the code that I have. What does this have to do with my pretty face? Navigation is not as interesting as deciding how to see. Many times navigation is more than careening off in the direction that leads you to the goal the fastest. What if there is a reason to follow a path? How do you find the path?

I sat back and ran through a series of thought experiments on how to execute this. The following questions came to me that needed an answer:
1. If there is a reason to follow a path, then how do I find the path?
2. If I find a path how often do I have to look for the path?
3. Maybe I want the video stream to get back to an operator and still be able to work on the data, how do I do it?
4. How much processing is really necessary to find the important elements of a scene?
5. An element of a scene is important, how do I decide if it is important for driving or for identifying to an operator?
6. This is a lot of processing, how can I reduce it to use lower-powered systems to reduce processing costs?

Suppose there is a path and I want to follow it while still making my course direction. Let's assume that the path runs in the general direction that I want to go and that it is relatively constant in inclination. Basic range finding can keep us from running into objects in the immediate area, and our navigator will have a limited number of states; the point of this is to limit the amount of storage needed for reprocessing what happened in the case of an accident. If the path changes constantly, then basic range finding can keep things going, and it reduces the need for stereoscopic depth perception.

As a matter of fact, the machine itself cannot make a lot of use of stereoscopic vision. Not that it is not useful, but it is an operator technology, not a machine technology. The machine can use stereoscopic vision to detect objects that may cause negative performance impacts. A really shiny thing in the way could "flash" and make a very small thing seem very big, which could confuse the system, or a shadow may seem like a hole and cause an unnecessary course correction. Coordinating objects between the two cameras allows false behaviors to be avoided, or at least identified. It could also be used to do simple comparisons (is this blob like that one?) and then reduce processing by discarding one of the copies.

If we can process the scene in such a way that we slow the effective frame rate down by not checking for the path as often, that is one less object in the scene to be processed. Optical flow algorithms will help with this, because you can quickly get velocity estimates for a given object relative to your position. As that changes, you can say: same element, do not reprocess it. If we identify the objects in the scene and discard the ones that are clearly not as important, then we can process many fewer regions of the scene, and/or pass the processing back to an operator.

This is going to be a multi-part post about basic vision systems and how I think they should be brought together and used in many applications without changing the code, and about how a few observations on what we want to do and what it really means may make a lot of design decisions easier. This code will focus on commodity components and instrumentation and control electronics from Phidgets and SparkFun.

## Tuesday, October 20, 2009

### The Girls Love the Long Ball, But the Boys Love the Spin

Someone once told me that it is not the boom, it is all about the shock wave. I guess that is true, you need to feel the motion of the ocean.

You ask, what does this have to do with gyros? Actually a lot, since driving a golf ball is not a momentum problem alone; the dimples hold the boundary layer on and make the ball go much farther. And if you cannot figure out how fast it is twisting, there is no way to know how far your ball will go.

Gyroscopes are cool. I have the neat little 300 deg/s board from SparkFun, part of a 3DOF IMU. I soldered on some pins so that it is easier to connect to, connected the ground to the system ground, and connected each of the signal pins to a separate "connector". Really easy, and I am a bit electrically declined.

I then set up an easy pseudo-code system to watch the ISensor.DataUpdated method. That lets me see the phidgets.datachanged event in my little API. The delegate that I use for ISensor.DataUpdated feeds my smoothing algorithm, which I use to keep out shorts and the wonky voltage changes that happen. This would feed the Kalman filter, which I still do not have.

I then feed these data-updated events into an ArrayList and run my Runge-Kutta routines on them to integrate the unknown functions. This running integration is how I get an approximation of the position of the gyro. Remember, gyros return the rate of change of the gyration, an angular velocity so to speak.

We can do a simple integration to get back to a position.

$\textstyle x_i = \dot{x}\,dt + x_{i-1}$

for constant timestep dt and the previous position, x_{i-1}.
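That running integration is easy to sketch (Python for brevity; the post's code is C#):

```python
# Sketch: running integration of gyro rate samples into an angle,
# x_i = xdot*dt + x_(i-1), for a constant timestep dt.

def integrate_rates(rates, dt, x0=0.0):
    x = x0
    history = []
    for xdot in rates:
        x = xdot * dt + x  # simple rectangular (Euler) step
        history.append(x)
    return history

# a constant 10 deg/s rate sampled at 100 Hz for one second: ~10 degrees
angles = integrate_rates([10.0] * 100, 0.01)
```

A Runge-Kutta step buys accuracy when the rate changes quickly between samples; for a constant rate the two agree.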

I will put some code together for the next post on this.

## Friday, October 16, 2009

### Phidgets Accelerometers, the Magic of Three-Axes

The three-axis accelerometer is a piezoelectric accelerometer that is about 1" square. It is pretty good, and I have never seen any drift or orientation issues. It measures each axis in units of g, so do not forget to convert to your units: a reading of 1.2 is actually an acceleration of 38.6 ft/s/s. I just make a little helper function to convert the readings when the phidgets_accelerationchanged method fires.

Hopefully, that will help clean up your code by reducing the risk of double-converting units. Even NASA makes this mistake. One of the first things that I set up is a library of unit conversion factors. That way it is less risky if the user wishes to see the measurements in mks or cgs while my software internally uses US Customary slug-ft-lbf. I do not scatter conversions through the code. Just read the electrical signals from the transducers and convert them once to real units, consistent with the system. Do not try to convert back and forth within the code; the mistakes will be miserable to find.
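A minimal sketch of those two ideas together, the factor library and display-time conversion (Python; the class and factor names are mine, the values are standard):

```python
# Sketch: one base unit system internally, conversion only at display time.
# FT_TO_M is exact by definition; G_TO_FT_S2 is standard gravity, rounded.
FT_TO_M = 0.3048
G_TO_FT_S2 = 32.174   # 1 g in ft/s^2

class Display:
    """Flagged display object: the internal value stays in feet and is
    multiplied out only at presentation time."""
    def __init__(self, metric=False):
        self.metric = metric

    def altitude(self, alt_ft):
        if self.metric:
            return f"{alt_ft * FT_TO_M:.1f} m"
        return f"{alt_ft:.1f} ft"

# the 1.2 g accelerometer reading from above, converted once to ft/s^2
accel_ft_s2 = 1.2 * G_TO_FT_S2
```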

Do not convert to any system different from your base unit system until you display the data. You can easily set a flag in the display object that shows the data to the user and multiply out the measurements at presentation time via the decorator pattern. To be honest, the Phidgets API is awesome. It makes short work of connecting and managing their instruments, so the code to start an interface kit is not so different from the accelerometer.

do
{
if (acc0.Attached)
{

Console.WriteLine("_acc0 attached");
//accelerometer events
acc0.Attach += phidgets_Attach;
acc0.Detach += phidgets_Detach;
acc0.Error += phidgets_Error;
acc0.AccelerationChange += _acc0_AccelerationChange;
}
else
{
Console.WriteLine("retry : " + retry + " waiting for acc0 attach");

}
retry++;
} while (retry < 10 && !acc0.Attached);

#region helperMethods
#region Phidgets event handlers
/// <summary>
/// handle the phidget device discovery events
/// </summary>
/// <param name="Sender"></param>
/// <param name="Args"></param>
protected void phidgets_Attach(object Sender, AttachEventArgs Args)
{
try
{
Console.WriteLine(Args.Device.Type + " attached.");
}
catch (Exception _exc)
{
throw new Exception(className + " protected void phidgets_Attach( Sender, Args) :: " + _exc.Message +
"\n");
}
} //phidgets_Attach
/// <summary>
/// handle the phidget device discovery events
/// </summary>
/// <param name="Sender"></param>
/// <param name="Args"></param>
protected void phidgets_Detach(object Sender, DetachEventArgs Args)
{
try
{
Console.WriteLine(Args.Device.Type + " detached.");
}
catch (Exception _exc)
{
throw new Exception(className + " protected void phidgets_Detach( Sender, Args) :: " + _exc.Message +"\n");
}
} //phidgets_Detach
protected void phidgets_Error(object Sender, ErrorEventArgs Args)
{
try
{
Console.WriteLine("phidgets error : " + Args.Code + " " + Args.Description);
}
catch (Exception _exc)
{
throw new Exception(className + " protected void phidgets_Error( Sender, Args) :: " + _exc.Message +
"\n");
}
} //phidgets_Error
/// <summary>
/// reads the acceleration from the Phidgets accelerometer
/// </summary>
/// <param name="Sender">accelerometer object</param>
/// <param name="Args">essentially an array of three doubles, one for each direction measured</param>
protected void _acc0_AccelerationChange(object Sender, AccelerationChangeEventArgs Args)
{
try
{
rawAcc[Args.Index] = Args.Acceleration;
}
catch (Exception _exc)
{
Console.WriteLine(
className + " protected void _acc0_AccelerationChange( Sender, Args) :: " + _exc.Message + "\n"
);
}
} //_acc0_AccelerationChange
#endregion

Another helper method that is constantly requested is converting from accelerations to roll and pitch. You can do the trigonometry yourself, but if gravitation is assumed to act in the -Z direction you can work out the basic orientation of the accelerating object. This can be fooled by large or quick orientation changes, but for the most part a high enough sampling frequency can fix that. So make sure that you do as little as possible that may bog down the event handler system; the handlers are really fast, and that is a good thing in this case.

/// <summary>
/// calculate the euler angles from the local accelerations
/// </summary>
/// <param name="Ax">acceleration toward the right wing, g [gravity multiples]</param>
/// <param name="Ay">acceleration toward the nose, g [gravity multiples]</param>
/// <param name="Az">acceleration toward the ground, g [gravity multiples]</param>
/// <param name="Compass">heading</param>
/// <param name="EulerAngles">roll, pitch, yaw</param>
private static void accel2euler(double Ax, double Ay, double Az, double Compass, out double[] EulerAngles)
{
EulerAngles = new double[3];
try
{
double g = Math.Sqrt(Ax * Ax + Ay * Ay + Az * Az);
/* Roll */
if (g != 0)
{
//EulerAngles[0]=Math.Atan2(Ay,Az);
EulerAngles[0] = Math.Atan2(Ay/g, -Az/g);
}else
{
EulerAngles[0] = Math.Atan2(Ay / 1, -Az / 1);
}
/* Pitch */
if (g != 0)
{
//EulerAngles[1] = Math.Asin(Ax/-g);
EulerAngles[1] = Math.Atan2(Ax / g, -Az / g);
}
else
{
EulerAngles[1] = Math.Atan2(Ax / 1, -Az / 1);
}
EulerAngles[2] = Compass; /* Yaw */

}
catch (Exception _exc)
{
throw new Exception(className + " public static void accel2euler( , " + Ax.ToString("0.000") + " , " +
Ay.ToString("0.000") + " , " + Az.ToString("0.000") + " , " +
Compass.ToString("0.000") + " ) :: " + _exc.Message + "\n");
}
}

The one thing you will notice is that you cannot get the yaw from the accelerations. That makes sense if you think about it: a flat rotation perpendicular to gravity would not be measured. I usually run a compass in the systems too. That makes the 3-1-3 rotation easy, to move between body reference frames and global reference frames. I would suggest that you multiply out the cells for each elementary rotation in a separate method, so that you can compose them by calling each method in turn on the result of the last rotation.
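A sketch of that suggestion (Python with plain lists; one method per elementary rotation, composed in sequence). The 3-1-3 convention assumed here is R = Rz(psi) * Rx(theta) * Rz(phi) acting on column vectors; check the order and signs against your own frame conventions:

```python
import math

# Sketch: elementary rotation matrices composed into a 3-1-3 sequence.
# Each elementary rotation lives in its own method, and the sequence is
# built by multiplying them in turn.

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_313(phi, theta, psi):
    # R = Rz(psi) * Rx(theta) * Rz(phi)
    return mat_mul(rot_z(psi), mat_mul(rot_x(theta), rot_z(phi)))
```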

This is one of the more difficult aspects of internet marketing for small businesses. Many others can spend more resources to get the word out at any given time. I have tried to approach it from a step-wise perspective. The first step was to get the basic website out; without content you have nothing. A buddy of mine said to make sure that you do not clutter ideas or muddy your message by trying to stuff too much on each page. There are plenty of ways to make the search engines pick up your content in systematic ways.

Next, gather up all of the suppliers or products that you normally use and make a simple page that links out to their official pages; that way your name or website will be linked to theirs. This will not in and of itself make a huge contribution, but if someone searches "this junk sucks", it will bring up your name. ;) As they say, any advertising where they spell your name correctly is good advertising.

I am not sure if this next step was the best, but I found some of the low-cost t-shirt and mouse pad sites and threw a bunch of products out there. People in general would not associate technical ideas with a t-shirt. However, it gives you a different lane of approach to your content. Make sure that you fill out the description sections of each item and try to make sure your main URL shows up in as many places as possible. These link-backs help your search engine scores and increase the possibility that someone will click through to your site. Traffic, even bounce traffic, is good. You never know whose little brother will see a t-shirt with a logo and pass it on to his brother, the director of your next customer's organization.

Another simple and seemingly the most effective plan of attack: use your open source community. They are mindless minions of habit. Ok, that is a bit harsh. However, Sourceforge, CodePlex and similar sites have huge, mostly anonymous traffic; if you can publish any of your source code as a project and link back to your website, you will get the word out to a large audience of people with similar interests. I have found that techies run in circles and that they are inexorably drawn to cool projects, even ones outside their fields of expertise. Likewise, if you use a product that has a user community, by all means post. Try to be as active as possible and keep putting your URL in your signature. That link-back thing is the cheapest and simplest advertising you can achieve.

Tweets that get sucked into the ether by retweeters are good too. However, it is not as guaranteed a method, because they are simple text messages and carry less information than the free poster sites. By all means, spam the web with your message. With billions if not trillions of pages out there, make sure that you get your information into as many places as possible.

Blogging is my last idea, but it is not as effective. Just from a time perspective you have to keep putting into it, and that makes it more expensive than just chucking out some posters. Posters are good because you can have a graphical message. Even back in the day, Tammany Hall said they did not care what the papers wrote; their constituents could not read, but they could UNDERSTAND pictures.

## Thursday, October 15, 2009

### How to Connect and Communicate With a Phidgets Interface Kit in C#

I have a bunch of their 8/8/8 kits. They are great. First, I would suggest getting their newest API package for your favorite platform.

I use Visual Studio, so my examples will be biased. I could regurgitate their examples, but mine hopefully don't suck.

Start from a simple command-line application. You may get into some static thread issues, but they are generally avoidable.

try
{
InterfaceKit ik1 = new InterfaceKit();
ik1.open();
}
catch (Exception _exc)
{
Console.WriteLine("interface kit error : " + _exc.Message + "\n");
}

This is pretty simple; it just opens the IK for management. Next you have to set up some events. In a constructor method, I dump the following code after the open() method executes.

do
{

if (ik1.Attached)
{

Console.WriteLine("ik1 attached.");

//interface kit events
ik1.Attach += phidgets_Attach;
ik1.Detach += phidgets_Detach;
ik1.Error += phidgets_Error;
ik1.SensorChange += _ik1_SensorChange;
ik1.OutputChange += _ik1_OutputChange;
ik1.InputChange += _ik1_InputChange;
}
else
{
Console.WriteLine("retry : " + retry + " waiting for ik1 attach");
}
retry++;
} while (retry < 10 && !ik1.Attached);

I like these do loops for the initialization setups. Why? Because the setup retries instead of failing outright, and it says which instrument is having an issue before it proceeds. I use the phidgets_xxx methods to have a standard system for handling the major Phidgets events.

#region Phidgets event handlers
/// <summary>
/// handle the phidget device discovery events
/// </summary>
/// <param name="Sender"></param>
/// <param name="Args"></param>
protected void phidgets_Attach(object Sender, AttachEventArgs Args)
{
try
{
Console.WriteLine(Args.Device.Type + " attached.");
}
catch (Exception _exc)
{
throw new Exception(className +
" protected void phidgets_Attach( Sender, Args) :: " + _exc.Message +"\n");
}
} //phidgets_Attach
/// <summary>
/// handle the phidget device discovery events
/// </summary>
/// <param name="Sender"></param>
/// <param name="Args"></param>
protected void phidgets_Detach(object Sender, DetachEventArgs Args)
{
try
{
Console.WriteLine(Args.Device.Type + " detached.");
}
catch (Exception _exc)
{
throw new Exception(className + " protected void phidgets_Detach( Sender, Args) :: " + _exc.Message +"\n");
}
} //phidgets_Detach
protected void phidgets_Error(object Sender, ErrorEventArgs Args)
{
try
{
Console.WriteLine("phidgets error : " + Args.Code + " " + Args.Description);
}
catch (Exception _exc)
{
throw new Exception(className +
" protected void phidgets_Error( Sender, Args) :: " + _exc.Message +"\n");
}
} //phidgets_Error

Then some simple methods to catch changes per port. I connect a different instrument to each port, so I have to calibrate the readings for each port so that the readings make sense directly from the Read() methods.

protected void _ik1_SensorChange(object Sender, SensorChangeEventArgs Args)
{
try
{
InstrumentType _type = InstrumentType.notset;
int _index = 0;

switch (Args.Index)
{
case 7:
_type = InstrumentType.notset;
_index = 4;
break;
case 6:
_type = InstrumentType.notset;
_index = 3;
break;
case 5:
_type = InstrumentType.notset;
_index = 2;
break;
case 4:
_type = InstrumentType.notset;
_index = 1;
break;
case 3:
_type = InstrumentType.notset;
_index = 0;
break;
case 2:
_type = InstrumentType.notset;
break;
case 1:
_type = InstrumentType.gyro;
gyro.SetRollData(
convertVoltageToUnit(InstrumentType.gyro, Args.Value, 1));
break;
case 0:
_type = InstrumentType.gyro;
gyro.SetPitchData(
convertVoltageToUnit(InstrumentType.gyro, Args.Value, 0));
break;
}
if (_type == InstrumentType.ultrasonicRangeFinder)
{
rangeFinders[_index] = convertVoltageToUnit(
InstrumentType.ultrasonicRangeFinder, Args.Value, _index);
}
Console.WriteLine(className +
" instrument change " + _type + " value: " + Args.Value);
}
catch (Exception _exc)
{
Console.WriteLine(className +
" protected void _ik1_SensorChange( Sender, Args) :: " + _exc.Message + "\n");

}
} //_ik1_SensorChange
protected void _ik1_InputChange(object Sender, InputChangeEventArgs Args)
{
try
{
}
catch (Exception _exc)
{
Console.WriteLine(className +
" protected void _ik1_InputChange( Sender, Args) :: " + _exc.Message + "\n");
}
} //_ik1_InputChange
protected void _ik1_OutputChange(object Sender, OutputChangeEventArgs Args)
{
try
{
}
catch (Exception _exc)
{
Console.WriteLine(className +
" protected void _ik1_OutputChange( Sender, Args) :: " +
_exc.Message + "\n");
}
} //_ik1_OutputChange

/// <summary>
/// converts the read value, probably in volts, to the correct units;
/// this will also apply a 6:4 smooth (old value*0.6 + new value*0.4) to help
/// </summary>
/// <param name="Type">type of instrument read, to get the correct coefficients</param>
/// <param name="Reading">reading of the instrument</param>
/// <param name="Index">some instruments are indexed</param>
/// <returns>the calibrated value</returns>
protected double convertVoltageToUnit(InstrumentType Type, double Reading, int Index)
{
double _value = 0;
try
{
int _InstrumentTypeIndex = 0;
double _old = 0;
switch (Type)
{
case InstrumentType.accelerometer:
_InstrumentTypeIndex = 2;
_old = rawAcc[Index];
break;
case InstrumentType.gyro:
_InstrumentTypeIndex = 3;
_old = gyration[Index];
break;
case InstrumentType.pressureStatic:
_InstrumentTypeIndex = 0;
break;
case InstrumentType.temperature:
_InstrumentTypeIndex = 1;
break;
case InstrumentType.ultrasonicRangeFinder:
_InstrumentTypeIndex = 4;
_old = rangeFinders[Index];
break;
case InstrumentType.servo:
_InstrumentTypeIndex = 5;
_old = servos[Index];
break;
case InstrumentType.notset:
break;
}
double A = double.Parse(
AppSettings["instrumentCalibration" + _InstrumentTypeIndex + "A"]);
double B = double.Parse(
AppSettings["instrumentCalibration" + _InstrumentTypeIndex + "B"]);
double C = double.Parse(
AppSettings["instrumentCalibration" + _InstrumentTypeIndex + "C"]);
//calibrate: value = A*x^2 + B*x + C
double _new = A * Reading * Reading + B * Reading + C;
//apply the basic 6:4 filter to smooth the data once an old value exists
if (_old != 0)
{
_value = _old * 0.6 + _new * 0.4;
}
else
{
_value = _new;
}
}
catch (Exception _exc)
{
throw new Exception(className +
" protected double convertVoltageToUnit( " + Type + " , " +
Reading.ToString("0.000") + " ) :: " + _exc.Message + "\n");
}
return _value;
} //convertVoltageToUnit

It is pretty simple. I then just use events to manage the reading of the ports or inputs. At this point I put the whole session into a System.Timers.Timer loop or a Console.ReadLine system to control the starting and stopping of the application.
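The calibrate-then-smooth step at the heart of convertVoltageToUnit is easy to check in isolation (Python sketch; A, B, C stand in for the per-instrument config entries):

```python
# Sketch: quadratic calibration value = A*x^2 + B*x + C, followed by the
# 6:4 smooth (old*0.6 + new*0.4) once a previous value exists.

def convert_voltage_to_unit(reading, old, A, B, C):
    new = A * reading * reading + B * reading + C
    if old != 0:
        return old * 0.6 + new * 0.4
    return new

first = convert_voltage_to_unit(2.0, 0.0, 0.0, 10.0, 1.0)   # no history yet
second = convert_voltage_to_unit(3.0, first, 0.0, 10.0, 1.0)
```

With A=0, B=10, C=1 the first reading passes straight through the calibration, and the second is pulled 60% of the way toward the previous value.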

### A Stability Manager is Good to Have, But You Could Wing It

More about the loathing and planning of your own autopilot. I used a hopefully simple system to handle the stability manager: a simple interface for strategies whose job is to keep the plane in the air. From this strategy interface, I developed several strategies that specialize in different aspects of managing the airplane's performance.

The strategies calculate a solution and then pass it to the task manager. The servo manager pops the solution off the task manager's stack, converts the new inputs to a servo position map, converts the map to servo motion, and executes the changes. The Phidgets controllers that we use allow several settings to be applied as they move the servos into position.

It may seem a bit "math nerd", but the cool thing about the Phidgets controller is that you can set the final position, the speed, and the magnitude of the acceleration used to make the changes. In general, I leave it to figure out the acceleration. However, in coordinated systems it may be important to set the jerk of the servo arms so that you do not lock up linkages or other mechanical interfaces to the servos.
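As a sketch of the solution-to-servo step (Python; the names are invented, not the Phidgets API), each entry in the position map can carry the three settings the controller accepts:

```python
from dataclasses import dataclass

# Sketch of a servo position map entry. The field names here are invented;
# the real controller exposes equivalent per-servo settings.

@dataclass
class ServoCommand:
    channel: int
    position: float        # target, degrees
    velocity_limit: float  # deg/s
    acceleration: float    # deg/s^2

def solution_to_commands(solution):
    """solution: channel -> target degrees, a stand-in for the strategy
    output. The motion limits are fixed here for illustration."""
    return [ServoCommand(ch, pos, velocity_limit=90.0, acceleration=180.0)
            for ch, pos in sorted(solution.items())]

cmds = solution_to_commands({1: -3.0, 0: 12.5})
```

Keeping the limits in the command, rather than hard-coded at the servo, is what lets coordinated surfaces move without binding the linkages.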

My strategies are simple. I tried to decorate a base object with virtual methods; that way, if I missed something or a method's implementation did not just come to me, the methods would still have something by way of implementation. This seems to be a good way to handle this kind of "how the heck do I do this" programming. I am sure there are more professional ways of doing this, but here is how I did it.

/// <summary>
/// used in the strategy design pattern, so that you can use different strategies and change
/// them by delegate
/// </summary>
public interface IStrategy
{
/// <summary>
/// execute the strategy but do not return any results
/// </summary>
void Execute();
/// <summary>
/// execute the strategy, return the result
/// </summary>
/// <returns>result of the strategy</returns>
object Execute(bool bResults);
/// <summary>
/// execute the strategy, return the result and an object
/// </summary>
/// <param name="Result">a second resultant object</param>
/// <returns>result of the strategy</returns>
object Execute(out object Result);

}

public interface IStrategyResult
{
object ShowResult();
}

Now these are the interfaces that I used to make a series of strategies to balance the different flight aspects and keep the system on course. The strategies run at about 20Hz, the results are compared at 10Hz, and decisions are made at 5Hz. That keeps the whole thing from bumping into the data collection and servo motion phases. It happens, but if the frequency is high enough, we can miss a solution or two and still make it in time to decide which solution was best.

A simple markup model then decides which result is most successful and passes it down the chain. My biggest troubles are always how to correctly smooth the position and orientation data; they are affected by several different coupling and acceleration balances. I have heard about these Kalman filters that everyone loves so much, but I have yet to be successful implementing one myself from scratch. I usually keep a simple integrator system to keep the changes small(er) and tend to stay in constant-speed level flight if at all possible. That is better for overall system performance anyway; you have to pay the piper for every energy change.

## Tuesday, October 13, 2009

### Business Opportunities in Central Europe or Brazil

Buzz Labs is looking for business contacts in Central Europe or Brazil. We are seeking interesting applications of UAS and UGV in most places or partners to help develop certain aspects of technologies for related projects. Our business is about developing technologies to support unmanned systems in general or how to achieve certain commercial or scientific requirements.

Maximizing our understanding of networking and aviation knowledge is allowing for fascinating new insights into what is possible even with the current states of technologies. We are most interested in multi-spectral image analysis, distributed pico-computing, alternatives to radio control, streaming technologies, power systems and system configuration.

Central Europe and Brazil are great stable markets that we would like to become a bigger player in. Please feel free to contact us with questions or ideas.

info@fatmanflying.com

## Saturday, October 10, 2009

### Wing Loading, Bah! Land It Like A Man! Full-Throttle and Nose Up!

Recently, we have heard lots of talk about our normally well-loaded wings. The balsa versions of our planes work great and have normal stall speeds around 15-20 mph. That is great for normal flying. They are hybrid flying wings and should fly like a Dutch-roll-resistant frisbee.

I have been building the carbon fiber and aluminum versions. They are a bit heavier than we expected, but that is due to real materials. We certainly will take out some of the fat in future revisions, but I think that the issue here is approach speed, you bunch of sissy girls. These planes are UAS; unmanned means the computer should be doing the work to bring the plane in on glide path. I hope our spars are strong enough for the cut-the-power approach.

What the heck am I talking about? Nope, never been asked that in polite company either. Wing loading is a relative measure that comes out of the basic low-speed aerodynamics of any fixed-wing aircraft. In some ways, it is a measure of the relative performance of a device with respect to constant thrust. It is most easily expressed as the mass of the aircraft divided by the wing area.

This essentially describes how strong the pressure difference must be between the surfaces to keep the plane in the air. The higher the wing loading, the larger the drag due to lift will be. This changes the trim characteristics of the plane in cruise and requires more thrust to keep the plane above stall. Any F-4 or F-15 driver will tell you that more thrust is the answer to everything.
"The critical limit for bird flight is about 5 lb/ft² (25 kg/m²). An analysis of bird flight which looked at 138 species ranging in mass from 1×10⁻² to 10 kg, from small passerines to swans and cranes, found wing loadings from about 1 to 20 kg/m². The wing loadings of some of the lightest aircraft fall comfortably within this range. One typical hang-glider (see table) has a maximum wing loading of 6.3 kg/m², and an ultralight rigid glider 8.3 kg/m²." - Wikipedia

The wing loading also changes the stall speed. If you have to use forward speed to generate enough lift for a given flight regime, you have to go faster to balance the weight of the aircraft with the lift generated. In a flying wing, you cannot just pick the nose up; roll control has to be gentle. The old adage "Little planes add flap, big planes add power" is our friend. If you were interested, the effect of wing loading on stall speed is expressed as the following.

$\textstyle v^2=\frac {2gW_S} {\rho C_L}$
v is the stall speed
g is the acceleration due to gravity
W_S is the wing loading (mass per unit wing area)
rho is the density of air
CL is the coefficient of lift of the net wing
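To put some numbers on that equation, here is a quick Python sketch. The C_L value is my assumption, not a measured Buzz Labs number; the 23 kg/m² comes from the loadout range later in the post.

```python
import math

def stall_speed(wing_loading_kg_m2, cl_max, rho=1.225, g=9.81):
    """Stall speed in m/s from v^2 = 2*g*W_S / (rho * C_L)."""
    return math.sqrt(2.0 * g * wing_loading_kg_m2 / (rho * cl_max))

# Assumed C_L max of 1.2; 23 kg/m^2 is the low end of our loadout range
v = stall_speed(23.0, 1.2)
print(f"{v:.1f} m/s ({v * 2.237:.1f} mph)")  # roughly 17.5 m/s, about 39 mph
```

Since the stall speed goes as the square root of the loading, quadrupling the loading doubles the stall speed, which is why the heavy airframes come in so much hotter than the balsa ones.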

Another interesting equation is the rate of climb. This is just a force balance between the net acceleration, the lift generated, and the weight of the device.

$\textstyle a_c=\frac{1}{2W_S}v_c^2\rho C_L -g,$

a_c is the climbing acceleration
v_c is the new airspeed
W_S is the wing loading
rho is the density of the free-stream air
CL is the coefficient of lift of the net wing
g is the acceleration due to gravity

The wing loading term is in the denominator, so if you want to climb faster you need to lower the wing loading, or increase speed. Increasing the speed is way more fun than having gossamer wings. More Power!

We are not so bad; we are in the 23-35 kg/m² range for our wing loading, depending on the equipment loadout. Less than some gliders. We, however, cannot skimp on pimping the power plant. As if we would do that.

| Aircraft | Wing loading (kg/m²) |
| --- | --- |
| Swan | 10 |
| Buzz Labs Schoolgirl UAV | 23 |
| Nieuport 17 | 38 |
| Cessna 152 | 51 |
| B-17 | 190 |
| A380 | who cares, it is an Airbus |
| F-104 | 514 |
| B747 | 740 |

We already know you can do your approach at a reasonable speed, but why? Sensible approaches are for people who do not think that 10 ft/s sink rates are for roller coasters. Land It Like a Man: Full Throttle and Both Hands on the Stick. Or just let the autopilot do it; it can tell how far it is from the ground and cut the power at stall two inches off the ground.

If the women don't find you handsome, they ought to find you handy!

### Motors, motors, motors

Did anyone else ever notice that anything truly important must be said thrice for men to notice? Girls, girls, girls; XXX; I have a million examples. Just an aside, but interesting to note. That may be why no one ever reads DANGER and stays away, or doesn't click.

This week I have been fighting the forces of evil all week. I know the buzzlabs.us site is not a work of art; it is a work in progress. Yeah, yeah, it needs more pictures. Let's go one step at a time. I found several distributors of NEU motors whose websites said, yeah, they are in stock. Seems that they need to update their sites? WTF, still not online with inventory status... or not advertising things correctly. If you don't have it, just say so.

Anyway, there I was thinking I had the NEU motors in hand, the 1912/3Y jobs. However, it seems that there is no such thing as in stock with these guys. They are supposed to be the best. I have been reading that the heli guys dig the X-Era motors from NC too.

I called Dave at X-Era. He was friendly and very helpful. I will keep up with his 4035 3Y 400kV motors. He is actually sizing a motor for us to swing these huge props. I am pretty curious what he will come up with. The NEU calculator came up with the 1912 line of motors because of the three-bladed props. I will post what he says when I get his response back.

If anyone knows of any 5-6 bladed props, that would be good to know too. I heard on the boards that Dario makes them. It is my opinion that I can slow the prop down enough by adding blades, and reduce the current draw by adding voltage. I would love to get our current draw to 20 A at 9s (33.3 V); our batteries would last a lot longer. I think maybe Bolly Props down in Oz may have them, but only three-bladed jobs, though.
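The current-vs-voltage trade is just P = V × I. A back-of-the-envelope sketch; the ~666 W figure falls straight out of the 20 A at 33.3 V target above, and the 6s comparison pack is my own example:

```python
def current_draw(power_w, volts):
    """Current needed to deliver a given electrical power at a given pack voltage."""
    return power_w / volts

power = 20.0 * 33.3               # the 20 A at 9s (33.3 V) target: 666 W
print(current_draw(power, 22.2))  # same power on a 6s pack: ~30 A
print(current_draw(power, 33.3))  # on 9s: back down to ~20 A
```

Same shaft power either way; the higher pack voltage just moves the burden off the current, which is what keeps the batteries (and the wiring) happy.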

## Saturday, October 3, 2009

### Of Directors and Other School Marms

In my model, there is one kingpin class. It does all of the general start-to-stop management of the system. The director class is what all of the managers hang from. I do not really hang a lot of factories off of the director; the decorations to the class are minimal.

At start up, the director sets all of its properties from the configuration files and starts the managers. The managers in my model are:

• instrumentation manager, im
• ai manager, aim
• stability manager, stab man
• communication manager, comm
• display manager, display

This can be relatively complicated since the observation system must be handled by the director. The data flows from the instrumentation manager to the stability manager (stab man observes im). The task manager observes the communication manager to queue jobs; it also watches the stability manager. The instrumentation manager also watches the task manager, but only because it contains the servo manager. Tasks go from the stability manager to the task stack and are then sent to the servo manager.
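That chain of who-watches-whom is just the observer pattern. A minimal Python sketch of the wiring; the manager names match the post, but the class itself is my own illustration, not the actual Buzz Labs code (which is C#):

```python
class Observable:
    """Anyone can subscribe a callback; publish pushes data to every observer."""
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def publish(self, data):
        for callback in self._observers:
            callback(data)

im = Observable()                 # instrumentation manager
stab_inbox = []
im.subscribe(stab_inbox.append)   # stab man observes im
im.publish({"port": 7, "value": 512})
```

Each manager exposes the same publish/subscribe surface, so the director only has to wire the subscriptions at startup and then get out of the way.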

I thought this was the best way because whether or not the communications manager has a command, the task queue will be populated. The servo manager will then grab tasks and handle them in a first-come, first-served pattern. If a command expires by sitting in the queue too long, it is dumped by the task manager when the servo manager pops it off of the queue. As the system becomes more complicated, there could be more queues to do more things. However, tasks that require moving a servo always go to the servo queue.
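A sketch of that expiry behavior, assuming a monotonic clock and a made-up two-second lifetime; the real task manager's rules are not spelled out in the post:

```python
import time
from collections import deque

class TaskQueue:
    """First-come, first-served; stale commands are dumped on pop."""
    def __init__(self, max_age_s=2.0):
        self.max_age_s = max_age_s
        self._q = deque()

    def push(self, task):
        self._q.append((time.monotonic(), task))

    def pop(self):
        # Dump anything that sat in the queue too long, then hand out
        # the oldest still-fresh task (or None if the queue drains).
        while self._q:
            stamped, task = self._q.popleft()
            if time.monotonic() - stamped <= self.max_age_s:
                return task
        return None
```

Checking the age at pop time rather than with a background sweep keeps the queue dead simple, at the cost of stale entries lingering until something asks for work.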

I have read the great MAV blog by Tom Pycke. He has lots of things to say on the topic of real-time operations. However, my take is that they are not really so important. I do agree with him that his way of dealing with garbled or miscommunicated commands to the MAV is pretty interesting. My way seems not so bad either. The system must be able to fly itself; you tell it a basic plan to fly along. Any command inputs from the ground station are "exceptions".

Exceptions are easier to manage because you do them as soon as possible. When they expire you stop doing them, and the system still tries to make the best of the situation and go back to the plan. This guidance could make a complete mess that breaks everything, say in the case of coordinates that are for a different city. Your system may start flying off in a crazy direction trying to get home. However, that is why there are a few directives that the director knows and is the only one that can execute.

I keep a few tasks in the clip for a special occasion. My "director-only" tasks are:
• shutdown
• go to safe altitude
• land immediately
• idle
Shutdown is a good example of a game ender: it shuts down the computer systems and cuts the current. For a UGS that may not be so bad; a UAS just falls out of the sky. In my overly instrumented systems, we have an aircraft-on-ground sensor. It is nothing as cool as weight-on-wheels, but it does check to see how far the fuselage is off of the ground.

This supports a flight-floor system: only in certain situations will the system not try to climb to the safe altitude. In general, the safe altitude should be above the tree line. Altitude is safe for a UAS; this is not as necessary for a UGS, but it can be interesting to have.

Land immediately works for either kind of system; it forces the system to plot the fastest route to the start area/landing strip. Once the system arrives in the pattern, it will begin normal landing/parking processes.

Idle is similar to shutdown, but it does not cut the power. In implementation, however, it means RC override. The computer lets the system freewheel; this can be useful if the computer keeps doing crazy crap and you have to bring it back.
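For illustration, the director-only set could look like this in Python. The command names come from the list above; the fields and behavior are my simplified guesses at what each directive touches, not the actual implementation:

```python
DIRECTOR_ONLY = {"shutdown", "go to safe altitude", "land immediately", "idle"}

class Director:
    def __init__(self, safe_altitude_m=60.0):
        self.safe_altitude_m = safe_altitude_m  # above the tree line, say
        self.running = True
        self.rc_override = False
        self.target_altitude_m = None

    def execute(self, command):
        # Anything outside the director-only set has to go through the queue.
        if command not in DIRECTOR_ONLY:
            raise ValueError(f"{command!r} must go through the task queue")
        if command == "shutdown":
            self.running = False             # game ender: everything goes dark
        elif command == "go to safe altitude":
            self.target_altitude_m = self.safe_altitude_m
        elif command == "idle":
            self.rc_override = True          # computer lets the system freewheel
        # "land immediately" would plot the fastest route to the strip here
```

Routing these four around the task queue means an expiring or backed-up queue can never delay the one class of commands that has to happen right now.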

The ai (artificial intelligence) manager is the basis for the flight state. It reads the preset "at least we have somewhere to go" map and determines which mission phase the system should be in. Mission state is important information because it times the use of instrumentation and cameras, as well as when to hit the "panic" button. The stability manager uses this to set system configurations and movement regimes for the system.

A director class is an operator, directing the data from one manager to the other. The AI manager is where the state manager exists. More on the AIM later; check in tomorrow if I am feeling feisty and have not cut myself up or glued something good to the table.