Op amps as comparators: be very, very wary

I have hundreds of thousands of units in the field where I used an op amp as a comparator, but it’s not a cakewalk by any means… in fact, as a friend says, it’s challenging Mr. Murphy. I do not recommend doing so… but marketing cost targets and PCB real estate limitations, combined with a leftover op amp in a quad package, may end up making it worth considering. Thus, let’s look at a few issues.

Op Amps are dog slow compared to comparators

The most apparent issue is that op amps are dog slow compared to comparators; an op amp driven hard into output saturation can take microseconds or more to recover, where a dedicated comparator responds in nanoseconds. Now, if your signals are pretty slow, speed is likely a non-issue.
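To put rough numbers on it, here is a back-of-the-envelope sketch. The slew rate and propagation delay below are illustrative, datasheet-class placeholder values, not from any particular part:

```python
# Back-of-envelope response-time comparison; values are illustrative
# placeholders, not from any specific datasheet.

def swing_time_us(delta_v, slew_rate_v_per_us):
    """Microseconds for an output to slew delta_v volts."""
    return delta_v / slew_rate_v_per_us

# Garden-variety op amp: ~0.5 V/us slew rate, 10 V output swing.
op_amp_t_us = swing_time_us(10.0, 0.5)   # slewing alone, before any
                                         # saturation-recovery penalty
# Modest comparator: propagation delay in the tens of nanoseconds.
comparator_t_us = 0.05

print(f"op amp:     ~{op_amp_t_us:.0f} us (plus overload recovery)")
print(f"comparator: ~{comparator_t_us * 1000:.0f} ns")
```

Two-plus orders of magnitude, and that is before the op amp pays its saturation-recovery tax.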

Be wary of an op amp’s output topology and power supply

Next is the matter of the op amp’s output. I.e., if you are using bipolar power supplies for the op amps, and the op amp/comparator feeds a micro, you need to play a few games, and that will add some cost and real estate. Or perhaps you are able to run the op amp on a unipolar supply with a rail-to-rail output, using the same power supply as the micro… expect Mr. Murphy to arrive, as you chase op amp instability due to unintentional positive feedback (mostly due to common mode effects). Such scenarios can be a real bear to deal with, requiring an untold number of PCB revs to make things happy and stable… and what if the PCB gets dirty with age? Throw in a little leakage to create a positive feedback path, and now you have field failures left and right.

If you are tempted to try something like this, add a second analog supply, separate the grounds, use Faraday shields, and consider conformal coating. Then, once you are all done, go hammer on the input and look for any signs of instability… hammer it hard; you may be surprised that you still have work to do. With such a topology, you are likely asking to see Mr. Murphy at every turn, so put up lots of stop signs, and once installed, hammer them to make sure they are solid.

Phase Inversion, oh noes!

Then comes phase inversion… yes, the term op amp vendors don’t like to talk about. Comparators are designed to tolerate a substantial differential input voltage, and the resulting input currents to some extent as well; op amps, on the other hand, are applied where the differential input voltage is theoretically zero, same with input current (Vos, Ib, and layout issues obviously preclude it from being truly zero, but you get the idea). If you go outside of the maximum specified differential, expect that you might see phase inversion, where the output snaps to the wrong rail. It happens with a lot of common op amps, perhaps less so with today’s designs than years ago… but no one likes to talk about it. DO NOT EXPECT SPICE MODELS to show this; in fact, don’t expect SPICE models to show much of any real world behavior…. Also be aware, each time the op amp gets whacked with an out-of-spec differential, it may be degraded permanently… not a good spot to be in.
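A simple pre-flight check helps here. The numbers below are hypothetical placeholders; the real ones come straight from the absolute maximum ratings table of your datasheet:

```python
# Sanity check: will the worst-case differential input exceed the
# absolute-maximum differential spec? All values are hypothetical.

ABS_MAX_VID = 0.7   # volts, hypothetical abs-max differential input
signal_max  = 4.5   # worst-case voltage on one input, volts
ref_voltage = 2.5   # comparison threshold on the other input, volts

worst_case_vid = abs(signal_max - ref_voltage)

if worst_case_vid > ABS_MAX_VID:
    print(f"DANGER: {worst_case_vid:.1f} V differential exceeds the "
          f"{ABS_MAX_VID:.1f} V abs max -- phase inversion and/or "
          f"cumulative damage territory")
```

Run the comparison against the worst-case signal excursion, not the nominal one; Mr. Murphy works the corners.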

Internal protection diodes may bite

And speaking of damage…. some op amps have internal protection diodes across the inputs, so if you go outside of the differential input specs, you forward-bias them and fire them up… expect all sorts of bizarre and unexplained behavior. This could include thermal issues on the die adjacent to the internal ground, making for all sorts of fun scenarios long after the op amp signals have returned to a nominal state.
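One common mitigation is a series input resistor sized to keep the clamp current survivable. The figures below are a hedged illustration, not a recommendation for any specific part:

```python
# If back-to-back clamp diodes sit across the inputs, any differential
# beyond a diode drop pushes current through them; a series resistor
# bounds that current. Illustrative values only.

V_DIFF_MAX = 5.0   # worst-case differential drive, volts
V_CLAMP    = 0.7   # one diode drop, volts
I_LIMIT_MA = 1.0   # conservative clamp-current limit, mA

r_min_kohm = (V_DIFF_MAX - V_CLAMP) / I_LIMIT_MA  # kohm, since V/mA
print(f"series resistor of at least {r_min_kohm:.1f} kohm")
```

Of course, the added resistance interacts with bias current and speed, so check the datasheet limits and the rest of the circuit before settling on a value.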

Apart from damage, pretty much anytime you get very far from zero volts differential on an op amp input, performance specs can get very dodgy. Sometimes manufacturers will spec out how performance degrades… oftentimes, since such use is a misapplication of an op amp, they leave that information off the datasheet. And yes, SPICE models as a general rule won’t tell you either, even more so if temperature varies.

Don’t use op amps as comparators, but if you must

So… what to do… don’t try it, but if you must, go over the datasheets with a fine-tooth comb. Look for gotchas in the input and output specs. Be very careful to avoid unintentional positive feedback paths. Give the apps guys a call at the factory, and ask ’em straight out: can I do this? They will tell you NO, but they may offer particular suggestions which might help. They know folks misapply their parts all the time… They also know that some op amps plain and simple will not work as comparators, no matter how much tweaking one does. In other cases, they must admit some models can do well in such a topology, provided the designer does their homework ahead of time.

Bad Economies Spurring Innovation? Accounting is key

I came across this blog post, and it made a lot of sense….

Lowering the water level: Do bad economies spur innovation? – Venture Hacks.

But the problem is getting there, and I think accounting procedures at many businesses are likely the key either to successful innovation, or to going under in a huge way.

In good economic times, it’s easy to play accounting games and shift expenses from a pet project to a mission critical area. I.e., the new whizbang has tons of overhead, so the solution is to shift the accounting for that overhead to existing cost centers, where it likely will remain hidden. Then multiply this by tons of pet projects, and all of a sudden, rather than a pet project having real costs associated with it… it seems a no-brainer. That’s fine, until the cost centers end up bloated and prime targets for budget reductions. The end result is that many core functions end up taking a hit, all the while the pet project looks good, at least for a while.

As budgets continue to shrink, the core function cuts will go too far, and that will hopefully force a restructuring of accounting games, such that the costs for pet projects become much more accurate.

Slashing key infrastructure only goes so far before an entity can no longer support its existing customers and ends up going under. Hopefully the downturn will foster more accurate accounting in time to put real numbers behind ALL functions… not numbers just good enough to continue funding pet projects for the next quarter, all the while letting core functionality go south, as all too often happens in large organizations.

Yes, I know it’s odd for the tech guy to put accounting on a pedestal, but cash flow and its proper allocation become really critical in a downturn… and if it uncovers inefficiencies and bad allocations of resources, it gets everyone on the road to recovery and innovation a ton faster, as contrasted with riding the fake numbers as the ship goes down.

Intro to Disappearing Filament Optical Pyrometers

Conceptually, these types of devices are fairly easy to understand; everyone is familiar with the term red hot. In the radiation thermometry arena, all we do is apply numerical values corresponding to temperature readings, based upon the radiation which is emitted. With a disappearing filament optical pyrometer, one visually looks at the color of a hot surface and compares that to a thin heated wire, wherein the temperature of the heated wire is a known value. When the heated wire is at the same temperature as the surface one is looking at, it effectively blends into the background.
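For the mathematically inclined, the reason the brightness match pins down temperature so well is that radiance climbs steeply with temperature. Here is a minimal sketch using the Wien approximation to the blackbody law; the 650 nm wavelength is an arbitrary choice in the red, and emissivity is ignored for simplicity:

```python
import math

C2 = 1.4388e-2  # second radiation constant, meter-kelvins

def relative_radiance(wavelength_m, temp_k):
    """Wien approximation to spectral radiance, arbitrary units."""
    return wavelength_m ** -5 * math.exp(-C2 / (wavelength_m * temp_k))

red = 650e-9  # ~650 nm, in the red where the eye judges "hot"
for t_k in (900, 1100, 1300):
    print(f"{t_k} K -> {relative_radiance(red, t_k):.3e}")
```

Each few-hundred-degree step multiplies the red-channel brightness many times over, which is why a trained eye can get surprisingly close.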

In fact, those skilled in the field can even come pretty close (within a few hundred degrees F) to estimating the temperature of a known material just by viewing it with the naked eye. I blew away the Minardi F1 guys by telling them their brake rotor temperatures via observation, back when I worked with them years ago.

Granted, instrument design, calibration, and accurate temperature measurements are a lot more complex than this simple explanation, but as an initial post on the subject, I thought it best to start off with a very simplified layman’s approach.

For a more detailed explanation, the guys over at Spectrodyne have a nifty graphic. Way back when, I did a lot of work with them, they are one of the ultimate calibration houses out there, albeit their focus is limited to a few specific manufacturers. They also repair and calibrate Radiamatic sensors, which played a large part of my life for a number of years.

Granted, these types of instruments have pretty much all disappeared, save the retrofit market and the Spectrodyne model. In part, this is because they require an operator to visually make a call, so wide variances can exist from operator to operator; in addition, different materials require different spectral regions for measurement, and since the human eye is limited to the visual spectrum, significant error can be introduced. These factors, combined with a need for tighter and tighter measurement accuracy, really limit their application. Yet, for ease of explanation, the basic operation is something most everyone can relate to.

Open Source Hardware and the Economy of Scale

In the software world, scaling comprises two parts: the technical aspect of whether the application will scale as users grow, and the marketing aspect. The technical part is that it won’t require an exponential increase in server/computing capability, and ideally, such costs per user would drop as more users are added. The marketing part is that the marketing overhead per user drops as the application grows… always a tricky part with any type of business, but with open source, perhaps even more critical.

In the hardware world, economy of scale also comprises two distinct parts. First, raw materials/components prices drop as the line item purchases become larger. I.e., if I want to buy one RCA jack, it’s $2 at Radio Shack; if I want 1,000,000 of them, I can even get them customized for a fraction of that cost. It’s even more dramatic with enclosures: getting 5 thermoformed enclosures might end up costing $500 each, whereas getting 500,000 injection molded enclosures might drop the cost down to $0.50 or less.

Then there is manufacturing overhead… everyone would like to build small quantities economically, but when it comes to electronics, oftentimes the setup costs are tens if not hundreds of times the individual piece part costs. I.e., it might take 20 seconds to populate a large, dense circuit board, but it takes 4-8 hours to program, load, and test the assembly equipment the first time to make it so. A similar deal exists in test engineering… i.e., a board level test fixture costs $2500, and whether you run 10 or 100,000 units through it (assuming one already has a LabVIEW-style master test console), the fixed costs remain the same. Lastly, there is the knowledge base of the line technicians… a 10 piece run does not develop a knowledge base to allow fast rework/repair or troubleshooting, whereas a 10K run pretty much means the line techs are fully up to speed and ready to roll.
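The arithmetic behind that is simple amortization. The setup and piece costs below are made-up placeholders in the spirit of the examples above:

```python
# Why small runs hurt: fixed setup cost spread over the run.
# Dollar figures are illustrative placeholders.

def unit_cost(qty, piece_cost, setup_cost):
    """All-in cost per unit for a production run of qty pieces."""
    return piece_cost + setup_cost / qty

SETUP = 2500.0  # e.g. a board-level test fixture, dollars
PIECE = 12.0    # hypothetical loaded piece-part plus labor, dollars

for qty in (10, 100, 10_000):
    print(f"{qty:>6} units: ${unit_cost(qty, PIECE, SETUP):,.2f} each")
```

At 10 units the fixture dominates the cost per board; at 10,000 it all but vanishes into the noise.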

All of these factors taken together make small production runs of open source hardware problematic. Granted, if the margin is there, either by uniqueness or customization opportunities, it’s much less of a problem, but for low margin products, it’s a real challenge.

Some of the ways to mitigate this are to choose parts which keep the BOM cost at a minimum to start with. I.e., avoid $25 highly specific parts, even though the price drops like a rock with volume. Another solution would be to include test fixture designs with the design, such that test engineering overhead is minimized. And of course, use production notes to get line techs up to speed well before they have run 10,000-plus parts. The use of a collaborative wiki, where all manufacturers can chime in with ideas, problems, and fixes, may also be of great help in keeping the economy of scale manageable for low volume production runs.

Production Ready, Enthusiasts, Concepts and Production Notes

Within the open source hardware domain, there is a wide range of approaches, everything from conceptual designs not far from the lunchtime napkin, all the way to production ready. Granted, a full blown design with Gerbers, BOM, AVL, mechanicals, and production notes, including pick/place targets, is easy to spot, just as scans of ideas off napkins or notebooks are; it’s really the projects in the middle that are hard to make the call upon.

Granted, if one is going to build 1, or perhaps a hundred, pick/place targets are likely not of great value, but production notes often are… and oftentimes, they are the most critical. I.e., things like ferrite beads and the key role proper temperature profiles play, or perhaps issues like potting, and how to prevent it from migrating into the connectors, etc.

And production is really where the rubber meets the road, so to speak. Back in my contract manufacturing days, it was often said that most anyone can build one; the challenge is building volume, and indeed that is all too true. It could be production tooling, calibration, test selects, final test, qualification, rework, common failure modes, or any number of factors. A few pages of notes can make the difference between great success, or huge frustration and potential failure.

Thus, as I start posting designs, I will be sure to include production notes, even for things that should be obvious; the ferrite bead issue is just one of many.

SEO and Page Titles

One of the things I’ve learned in my SEO travels is that search engine rules evolve over time, and are becoming closer and closer to what a user wants, as contrasted with what the search engine finds pragmatic.

As such, a meaningful page title makes a lot of sense. For example, in this case, the title is SEO and Page Titles [Ron Amundson]. While it’s not super friendly, it does serve to identify what this page is truly about. In the ideal case, my blog software would allow me to set the page title in a more readable fashion. However, be that as it may, at least if a user looks at the title, they have something readable that makes sense.

Oftentimes, page titles are either computer generated, and thus not terribly intelligible, or in other cases, they are a default setting spread across a whole web site. Both of these scenarios are not the greatest for users, nor are they conducive to a search engine.

The other thing to keep in mind is that the title of a page should indicate what is on that page. I know it seems obvious, but many in the web community seem to miss the fact that web pages are read by humans, and it does not matter how much you game the search engines into sending visitors to your page: if a user shows up and then immediately hits the back button, you never had a chance to tell them about your content anyhow.

There are cool tools available to verify keyword density, such that you can verify your title indeed reflects the content of your page. Here is one super cool such analyzer: http://www.ranks.nl/tools/spider.html.
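For the curious, the core of such an analyzer is only a few lines. A minimal sketch (the sample body text is invented purely for illustration):

```python
import re
from collections import Counter

def keyword_density(text):
    """Fraction of total words that each word represents."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return {w: count / total for w, count in Counter(words).items()}

body = ("Page titles matter. A clear page title tells users and "
        "search engines what the page is about.")
density = keyword_density(body)
print(f"'page' density: {density['page']:.0%}")
```

The real tools add stemming, stop-word filtering, and weighting of titles and headers, but the principle is the same: count what the page actually says, and compare it to what the title claims.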

Why I chose DokuWiki for my page

I actually debated how to set up my webspace. With my other web pages, I typically use a CMS, and while those are incredibly cool, they are not simple to jump into, create content, and jump back out of again. This is especially the case should one want to dynamically organize a website as it evolves. For example, on my flight instruction website, I already knew the categories I wanted, and the focus each category should have. On RonAmundson.com this is a bit more challenging.

The intent is to showcase some of my group’s technical skills with white papers and blog entries, and also serve as a sounding board for ideas which don’t fit the inventor or hobbyist mindset which I present on www.inventorsgarage.com and www.kb0pax.com.

The difficulty is that we focus on a lot of areas as we bring a product to market or solve production problems. For example, as process troubleshooters, it would be easy to create a whitepaper on temperature control of surface finishing processes, another on the dynamics of thermoforming, or even tweezer welding. Yet, we could also look at product development from the standpoint of prototypes, feasibility studies, beta testing, or even product management.

Now, the marketing side of me says whoa….. you idiot, focus, focus, focus, and key your marketing to your focus. That certainly is true for marketing purposes. There is a certain lack of credibility that occurs when you see a group advertising they do everything under the sun. We don’t; being a small team of experts, we can’t, and there is no way we are going to promote ourselves as such. But we do have a very diverse technical skill set, and we have connections with a wealth of others.

As a result, the wiki format allows me to focus pages on discrete sections of product development and/or process troubleshooting, and easily evolve the main pages to direct our clients to the needed area, without coming off as an outfit that does everything but is a master of nothing. People looking for a specific subject in a search engine may well find a key whitepaper that is helpful to them. Yet, if someone goes to the homepage, more than likely they will see our focused areas of specialization, rather than the huge knowledge base that exists underneath.

Google Docs, a cool collaboration tool

Google Docs is super cool. Currently, I am working with 5 different collaborators from all around the world on a number of different projects. Google Docs provides a framework for revision tracking, worldwide access, and the ability for multiple users to work on a document concurrently.

Not only is a Word-type format available, but also a spreadsheet. Both seem to work pretty flawlessly, and a little window pops up to let you know when more than one person is editing a document at once.

Considering the magnitude of some of our projects, Google Docs is truly a lifesaver. I am totally awed. Now, if only they could add mechanical and electronics CAE tools. 😀
Google Docs, a cool collaboration tool · 2007/03/26 20:45

Webpages with Zero Meaningful Content


This is a beef of mine. In the early days, one would often come across those “under construction” graphics, or links to pages with zero meaningful content. Now, it seems they have been replaced with pages that still have zero meaningful content, but carry advertising, or exist primarily to game the search engines.

I have always tried to stay away from such pages. Yet, when doing a top down design, it is often difficult not to just put in header pages with zero content for organizational purposes. This is one of the key reasons I chose a wiki format for this webpage. The architecture is dynamic, and thus there really is no need for zero meaningful content.

However, I am going to break my rule of thumb, and thus create one. The intent is not to lead people astray, nor game the search engines, but rather to serve as my scratch pad and web page to-do list. I’m finding I need a wiki webspace to keep track of ideas and concepts, rather than keeping them on the desktop or in Google Docs. I.e., the intent being that as an idea is developed, the subpage goes live with the addition of wiki tags, rather than a cut and paste.

I think there is a way to set up my robots.txt file to disallow access, as I do not want robots indexing it, should some webizen happen to be searching for a term and then be aggravated when they come up with a page with zero meaningful content. Until I figure this out, my apologies to anyone who comes across it.
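For the record, a robots.txt exclusion along these lines should do it; the path below is a placeholder, as I have not settled on where the scratch pad will live:

```
# robots.txt at the site root -- asks well-behaved crawlers to skip
# the scratch pad namespace (the path below is a placeholder)
User-agent: *
Disallow: /scratchpad/
```

Note this is only a polite request; well-behaved crawlers honor it, but it is not access control.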

Leadership Analysis

I always seem to find the challenges, or as open desktop mechanic states, doing things just this side of impossible. Well, now the fun begins.

I get to analyze an organization for operational and leadership effectiveness. Granted, years back I did industrial consulting on the tech side, and being exposed to a multitude of businesses does give me a unique view. It’s often said that tech problems are easy; the tough part is how to appropriately manage them.

Thus, it’s time to get the books out, do some digging, and refresh my mind on how to go about this. Initially, I thought of 360 degree feedback, as that was sort of the buzzword years ago. Yet, finding appropriate forms, and interpreting such data when one is not totally up to speed, can give less than valid results.

As such, I’m going the old and simple way: observation and reporting as a third party outsider. Since I won’t be privy to all the details, it’s a bit harder than if I were an insider. Yet the lack of bias is probably what this organization really wants, as contrasted with a rose-colored-glasses or glass-half-empty approach.

Thus, it’s leadership checklist time.

Because analog is cool!