
Archive for the 'Dev Tools' Category

PCB Autorouting

The Apple II Daisy Chainer for Floppy Emu is done, and I’ve ordered a few PCBs for testing. This proved to be a surprisingly challenging board to lay out and route. It’s a simple two-layer board, and I wanted to keep it as small as possible, so the signal routing quickly became very congested. Hand-routing the first 40 traces took almost a whole day of tedious work, so I finally gave up and tried autorouting the remaining 80 traces. And it worked! The result looked pretty good, too. I’m curious to know how autorouting has turned out for other hobbyists. Have you ever autorouted nearly an entire board? What were the results?

I haven’t had good results with Eagle’s built-in autorouter, so for recent projects I’ve been using a stand-alone autorouting software package called Freerouting. It’s a Java-based app with a slightly awkward UI, but it’s impressively powerful. First you export your Eagle board file in .DSN format, using a script provided by Freerouting. Then you import the DSN file into Freerouting and let it work, running for about 1 to 15 minutes. When it’s finished you can export an Eagle session script, which will rip up all your old tracks and lay down the new ones calculated by Freerouting.

Obtaining good autorouting results requires careful setup of Eagle’s design rules. For example, the default might be to use 8 mil traces with 8 mil minimum spacing between traces, but power and ground might be treated as a separate net class requiring 20 mil traces for extra current-carrying capacity. The design rules also define what size drill to use for vias, and how close to the edge of the board it’s OK to route traces. The autorouter takes all of these constraints into account when it does its job.
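
To make that concrete, here’s a rough Python sketch of the kinds of constraints involved: per-class trace width and clearance, via drill size, and board-edge keep-out. The names and numbers are made up for illustration, since Eagle’s actual DRC and net class dialogs have their own fields and file format.

```python
# Rough sketch of the kinds of constraints a design-rule setup encodes.
# Eagle's DRC and net class dialogs have their own fields, so the names
# and numbers below are purely illustrative.

MIL = 0.0254  # 1 mil in millimeters

design_rules = {
    "signal": {"trace_width": 8 * MIL, "clearance": 8 * MIL},
    "power":  {"trace_width": 20 * MIL, "clearance": 8 * MIL},  # extra current capacity
    "via_drill": 12 * MIL,            # hole size for vias
    "edge_clearance": 20 * MIL,       # keep traces this far from the board outline
}

def net_class(net_name: str) -> dict:
    """Pick a width/clearance class: power and ground nets get the wider rule."""
    if net_name.upper() in ("VCC", "GND", "+5V", "+3V3"):
        return design_rules["power"]
    return design_rules["signal"]

print(net_class("GND"))    # wider power/ground rule
print(net_class("MISO"))   # default signal rule
```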

So what’s not to like about the autorouter? Can a skilled human do a better routing job? Why not autoroute everything? Here are a few drawbacks that I found.

 
Fixed Component Placement – When I route by hand, I’ll sometimes discover that I’ve placed a component in the middle of a chokepoint for PCB traces. If the placement isn’t critical, I can move the component to make routing easier. The autorouter can’t do this, at least not that I’ve seen. It treats the component placements as fixed and immutable, even when nudging a resistor 10 mils to one side could make a huge difference.

In theory there’s no reason that the autorouter couldn’t handle changing component placements. Additional constraint information could be supplied, providing a bounding box of acceptable placements for each component, or tagging some component placements as critical and non-movable.

 
Unnecessarily Aggressive Routing – If the design rules say that 8 mils is the minimum acceptable spacing between traces, the autorouter will often stack traces 8 mils apart even when there’s plenty of room for wider spacing. It’s too aggressive – pushing spacing to the limit when it doesn’t need to. In theory this shouldn’t be a problem, but it feels like poor design practice to me. To avoid the risk of accidental short circuits, I prefer not to push the PCB manufacturing limits except in those few areas of the board where there’s no alternative.

A good example is autorouting between adjacent pins of a 0.1 inch header. There’s roughly 30 mils of space between the metal rings that surround adjacent pins, providing enough space for an 8 mil trace with 11 mils of space on each side. But Freerouting often placed such traces off-center, with 8 mils of space on one side and 14 on the other. It met the design rules, so from its viewpoint there was no reason not to do it that way.
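
Here’s the quick arithmetic behind that example, for anyone who wants to check the numbers:

```python
# Quick arithmetic behind the header-pin example (all values in mils).
pad_gap = 30                       # rough space between adjacent annular rings
trace_width = 8

leftover = pad_gap - trace_width   # 22 mils of air
centered = leftover / 2            # 11 mils on each side of a centered trace
off_center = (8, leftover - 8)     # the 8 / 14 split Freerouting sometimes chose

print(centered, off_center)        # 11.0 (8, 14)
```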

In practice an 8 mil minimum spacing is already conservative, and most PCB manufacturers can do 6, 5, or 4 mil minimum spacing. So it’s unlikely this behavior will cause any trouble here, but it might for other boards with smaller minimum spacings.

 
Nuances of the “Best” Routing – Once Freerouting finds a routing solution, it tries to optimize it by minimizing the total trace length and the number of vias. These are good optimizations, but sometimes I want other considerations to trump them. For example when I route manually, I try very hard to minimize vias in power and ground traces. I would be happy to add 5 extra vias elsewhere if I could eliminate one via for power or ground. Freerouting doesn’t support this kind of trade-off, so all vias are treated equally.

A related consideration is the number of power and ground paths. The autorouter considers a pin to be powered if there’s any combination of traces leading from the power supply to the pin. Only one such path is required, and the result is a logical tree with the supply at its root and the power pins at the end of the branches. But when I route manually, I often connect branches back together, creating multiple paths from the power supply to the pin. I use the same approach for grounds. The goal is to minimize the voltage difference between power and ground points on different regions of the board.

The autorouter doesn’t care about visual aesthetics, and normally I don’t either. But when two possible routes are equally effective, I choose the one that’s similar to other traces I’ve already routed. It makes the board’s interconnections easier to understand at a glance.

Freerouting also doesn’t account for any potential analog signal effects, at least not that I’ve seen. If it’s important to minimize the trace length for a specific signal, or match the trace lengths of two signals, it can’t help. It also doesn’t consider problems like the parasitic capacitance formed by two long parallel traces. Depending on the signals, the signal speed, and other factors, a skilled human might choose to route two signals on non-parallel paths even if parallel routing was the shortest distance with the fewest vias.

Do you have any autorouting secrets to share? Let’s hear them!


Atmel ICE Wiring Horror

The Atmel ICE programmer/debugger has its SWD connector pins reversed from the standard ARM Cortex debug connector. Arghh… why? It’s the same physical connector (5×2 0.05 inch polarized male), but the pins are rotated 180 degrees from the standard, as if the cable were plugged in backwards. Incredibly, it also ships with a 180-degree reversing cable, so it works – as long as you use their cable. It’s not a mistake: the product is actually designed around a reversed connector with an un-reversing cable.

WHO BUILDS A PRODUCT THIS WAY?? I shake my fist at you, Atmel hardware designer. This is like designing an electric outlet where 110V and ground are swapped from their normal locations, but bundling it with an appliance cord that unswaps them.

The problem began when I grew tired of Atmel’s tiny 4-inch cable, and decided to get a replacement cable. Naturally I used a standard 10-pin ribbon cable with 5×2 0.05 inch connectors on each end: Adafruit’s purpose-made SWD cable for ARM development. I couldn’t understand why it didn’t work, and why strange things happened when it was plugged in.

After scratching my head for a while, I noticed something odd. The title photo shows both the original cable and the Adafruit cable connected to the Atmel ICE. Notice how one has the red stripe on the right, and the other on the left? Ouch!

From looking at the original Atmel cable, it’s not at all obvious that the pins are reversed. It looks like a straight-through cable, because each connector is crimped straight onto the 10-pin ribbon, with no crossing wires. But a more careful inspection reveals that one of the connectors is crimped on so it’s facing the opposite direction of the other, resulting in a 180-degree rotation of the pin assignments. It violates the rule of the cable’s red wire indicating the location of pin 1. (Ignore the extra 6-pin header, which is used for other devices.)
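
A small Python sketch shows what that 180-degree rotation does to the pin assignments. It assumes standard IDC numbering and the commonly published ARM Cortex 10-pin debug pinout, shown here only for illustration:

```python
# What a 180-degree rotation does to a 2x5 (10-pin) header: with standard
# IDC numbering (odd pins on one row, even on the other), pin n on one end
# lands on pin 11 - n at the other end. Signal names follow the commonly
# published ARM Cortex debug pinout and are shown only for illustration.
cortex_debug = {
    1: "VTref",   2: "SWDIO/TMS", 3: "GND",    4: "SWCLK/TCK", 5: "GND",
    6: "SWO/TDO", 7: "KEY",       8: "TDI/NC", 9: "GNDDetect", 10: "nRESET",
}

def rotated(pin: int) -> int:
    """Pin that this position maps to when the connector is rotated 180 degrees."""
    return 11 - pin

for pin, signal in cortex_debug.items():
    print(f"pin {pin:2d} ({signal}) -> pin {rotated(pin)} ({cortex_debug[rotated(pin)]})")
```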

The good news is that there’s no permanent damage to my ARM board. The bad news is that I still need a replacement cable, but now I need to build a reversing one.


Crazy Fast PCB Manufacturing

I finished the redesign for my Apple IIc Drive Switcher PCB on Monday morning, and submitted the Gerber files to Elecrow on Monday at 11:25 am. Friday at 5:20 pm I held the finished PCBs in my hands. Only 4 days for manufacturing and delivery. From China. According to the tracking info, my package took just 17 hours to travel from Elecrow’s Shenzhen facility to my doorstep in California. Total cost for everything was a mere $29.96.

We live in a crazy world, where a completely custom and intricate item can be manufactured on the other side of the planet and delivered to my door in 4 days, for the cost of a pizza and beer. Thank you Elecrow!


Atmel-ICE + Floppy Emu. The Taste of Sadness

In all the time I’ve been developing Floppy Emu, can you believe I’ve never had a debugger? I use the Atmel AVRISP mkII to program the Emu’s ATMEGA1284 microcontroller, but the mkII only supports programming and not debugging. I don’t have a serial port or other logging mechanism either, so when I encounter mystery problems, I need to debug them by displaying one-time error codes or blinking an LED. Not very convenient.

Last week I finally purchased a proper debugger – the Atmel-ICE. It supports both JTAG and debugWIRE debugging, which are the two interfaces used in the ATMEGA family of microcontrollers. The Atmel-ICE also supports Atmel’s SAM family of 32-bit ARM microcontrollers, which will be handy if/when I start working with those.

With excitement, I connected the Atmel-ICE to Floppy Emu’s 6-pin ISP port. I was able to communicate with the ATMEGA1284 chip, view its fuses, and reprogram it. Visions of breakpoints and memory dumps danced in my head – this was going to be awesome. Then I started a debug session, only to encounter this error:

ISP on Atmel-ICE does not support debugging. Device is only programmed.

Use Start Without Debugging to avoid this message.

Wha-wha-what? Does not support debugging??

I searched in vain for a debugWIRE option, but the only interface choices displayed by the software were ISP and JTAG. After a frantic documentation search I discovered the sad truth. I’d thought the ATMEGA1284 had both debugWIRE and JTAG debugging interfaces, but the presence of JTAG actually means the chip doesn’t support debugWIRE. Even though the necessary debugWIRE pins are right there, and debugWIRE is mentioned in the chip’s datasheet, and there’s a debugWIRE enable fuse on the chip. Ugh.

I confirmed that a test project configured for the ATMEGA328P causes the debugWIRE option to appear in the Atmel Studio software. But Floppy Emu doesn’t have an ATMEGA328P. As soon as I change the project’s device type to ATMEGA1284, the debugWIRE option disappears.

Unfortunately I can’t use the JTAG interface, because I didn’t plan for it, and there’s no JTAG connector on the Floppy Emu board. I can’t do micro-soldering surgery to create a JTAG port either, because the ATMEGA1284 JTAG pins are connected to the board’s Xilinx CPLD chip, to support runtime reprogramming of the CPLD.

I’ll continue debugging with a blinking LED. Debuggers are for wimps. 🙂


Quest For a Simple JTAG SVF Player

Where are all the cheap and simple third-party JTAG programmers? My work on the Yellowstone disk controller was sidetracked by issues with the Lattice-clone JTAG programmer, and discussion about ways that future Yellowstone customers might do JTAG programming. Eventually I realized that any generic JTAG hardware can program Yellowstone’s Lattice FPGA using a standard SVF file generated by the Lattice software. But what JTAG programming hardware to use, and what software to control it?

The device I’m imagining needs to play SVF files on a JTAG interface and verify the result. That’s all. It doesn’t need to handle JTAG debugging, ISP programming, SPI, or anything else. It should have easy cross-platform software to control the device: a basic GUI where the user can select an SVF file and click a button, then see progress info and a final good/bad result. Since JTAG is a fairly simple serial interface, this shouldn’t be much more difficult than building something like a USB-to-serial adapter, along with software to control it, which could leverage something like lib(X)SVF for SVF parsing. I confidently assumed there would be many such devices available for sale.
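
To give a sense of how simple the core job is, here’s a rough Python sketch of the heart of an SVF player: just the SIR/SDR “shift these bits” subset, with a stubbed-out pin function standing in for real GPIO or FTDI hardware. A real player, which is what lib(X)SVF provides, also handles RUNTEST, STATE, header/trailer registers, TDO compare masks, and the full TAP state machine, but the essence is just this:

```python
# Rough sketch of the core of an SVF player: parse SIR/SDR statements and
# shift the bits out over JTAG. Real SVF has many more commands and compare
# fields (which is what lib(X)SVF implements); this only shows the idea.
import re

def jtag_pulse(tms: int, tdi: int) -> int:
    """Drive TMS/TDI, pulse TCK, and return the sampled TDO bit.
    Stubbed out here; real hardware would toggle GPIOs or an FTDI chip."""
    return 0

def shift_bits(bits):
    """Shift bits LSB-first, raising TMS on the last bit to leave the shift state."""
    return [jtag_pulse(1 if i == len(bits) - 1 else 0, b) for i, b in enumerate(bits)]

def play_svf_statement(stmt: str):
    """Handle the 'SIR/SDR <length> TDI (<hex>);' subset of SVF."""
    m = re.match(r"\s*(SIR|SDR)\s+(\d+)\s+TDI\s*\(([0-9A-Fa-f]+)\)", stmt)
    if not m:
        return None                      # everything else is ignored in this sketch
    length, value = int(m.group(2)), int(m.group(3), 16)
    bits = [(value >> i) & 1 for i in range(length)]
    # (a real player first walks the TAP state machine to Shift-IR or Shift-DR)
    return shift_bits(bits)

play_svf_statement("SDR 8 TDI (A5);")
```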

I was wrong. My search found nothing that really fit the requirement of cheap and easy, though a few options came close. Here’s a summary of what I found. If I’ve overlooked something obvious, please leave me a note in the comments.

Bus Blaster – A $35 open source JTAG tool based on the FT2232H. The hardware fits the bill, but the software does not. The Bus Blaster seems mainly intended to be used with UrJTAG or OpenOCD software, which are both unfortunately the opposite of user-friendly. From what I can tell, neither one has an official binary distribution and must be compiled from source code. From there it’s a somewhat daunting sequence of command line work to get things done. Surely there’s a simpler option if all that’s needed is an SVF player?

Flash Cat – A $30 commercial JTAG device that also does SPI, I2C, and serial. It’s not clear what IC it uses. This might almost work, and the software looks promising, but it’s Windows-only.

TinyFPGA Programmer – A $9 open source programmer for the TinyFPGA product line, and based on a PIC microcontroller. This is the best solution I’ve found so far, and the software is simple and cross-platform. But the hardware is specifically designed for the TinyFPGA boards, and so it makes assumptions about voltage levels and JTAG pinouts, and may have other limitations that prevent it from being a general-purpose programmer. The software is a python module and the GUI is Tk. Beggars can’t be choosers, especially for only $9, but a native control program for Mac and Windows would be preferred.

Those were the only decent commercial options I could find. Everything else was either a platform-specific tool, or a more complex and expensive JTAG debugger like the Segger J-Link, or somebody’s one-off hobby project.

Am I blind, and overlooking an obvious solution? If not, why doesn’t something like this already exist? Is it more complicated than I imagine? Or is there simply no demand for it? It sure looks like you could build a device like this using any simple microcontroller or USB-to-serial converter. Then spend a little time creating native apps for Mac and Windows, and get everything lined up regarding necessary drivers and software signatures, and you’d have a very desirable product. I have half a mind to drop everything and go do this myself.


Zeroplus LAP-C 322000 Logic Analyzer Review

Who needs a logic analyzer? You do! If you enjoy working with electronics, a good logic analyzer is an indispensable tool. It lets you examine exactly what’s happening with your data signals, so you can troubleshoot problems quickly or reverse-engineer unknown protocols. Developing digital electronics without a logic analyzer is like navigating without a map.

In decades past, logic analyzers were bulky and expensive purpose-built machines from companies like Hewlett Packard. There’s still a place for such high-end equipment, but today most engineers can get everything they need from a small USB device that costs just a few hundred dollars. Saleae is one of the better-known names here, but it’s a crowded market with many worthy competitors. Which one is right for you?

 
Hello Zeroplus

Today I’ll take a look at an example of the Zeroplus “Logic Cube” series. Zeroplus is based in Taiwan, and their logic analyzer products have a good reputation among Taiwanese IT companies, but are less well known in the West. Full disclosure: Zeroplus sent me a demo unit for the purpose of this review. What you’ll read below are my own unfiltered opinions of the hardware’s strengths and weaknesses.

The members of this product family are buffered logic analyzers, with a few megabits of high speed on-board memory for storing the captured sample data. This is the more traditional approach to designing a logic analyzer, and it provides some advantages versus the on-the-fly USB streaming approach used by Saleae and other similar low-cost logic analyzers. Buffered LAs aren’t reliant on the PC’s USB bandwidth, so they can capture many channels at high sample rates without fear of saturating the USB link and losing data. The tradeoff is that buffered LAs have finite storage space, so the maximum amount of data that can be captured is limited. Data compression helps, but once the on-board memory is full, signal acquisition stops. In contrast, streaming LAs support “infinite” capture sizes limited only by the PC’s available RAM and disk space, but have limited bandwidth.

The Zeroplus logic analyzers in this Logic Cube series are commonly called “LAP-C,” after the shared prefix in all of their model numbers, and you’ll find the most pertinent resources on the web by searching for this term. LAP-C models are sold in seven different versions, differing mainly in the number of channels and amount of on-board memory. For this review, I tested the top of the line LAP-C 322000, with a retail price of $1699 in the United States. Here are the specs:

Logic Analyzer: Zeroplus LAP-C 322000
Number of Channels: 32
Sample Memory: 64 Mbits (2 Mbits per channel)
Max Sample Rate: 200 MHz
Vinput of Testing Channels: -30V to +30V DC
Retail Price: USD $1699

In comparison, the mid-range members of the LAP-C family pare back the number of channels and memory size in exchange for lower prices. For example, LAP-C 16064 is 16 channels / 1 Mbit / 100 MHz for $249, and LAP-C 16128 is 16 channels / 4 Mbits / 200 MHz for $399.

 
Unboxing and Hardware

While the front of the box has English text, the technical bullet points on the back are primarily in Chinese. This probably doesn’t matter much, since very few people will ever be browsing store shelves when shopping for a logic analyzer, but it shows that Zeroplus’ marketing team has additional translation work to do if they hope to attract more Western customers.

The 34-page instruction manual inside the box is also in Chinese, but a well-written English-language manual can be downloaded here.

The logic analyzer unit and nearly all of its accessories fit inside an included hardshell zippered case. While it’s a small thing, this is a very nice touch, and it makes it easy to pack up your project when you’re finished debugging hardware. The pulse width trigger module included with the LAP-C 322000 model is the only accessory that doesn’t go in the hardshell case.

The LAP-C 322000 package includes:

  • logic analyzer main unit
  • USB cable
  • 32 flying lead wires
  • 36 IC test hooks
  • printed manual (Chinese)
  • installation software CD
  • hardshell case
  • pulse width trigger module
  • hookup wires for trigger module

The pulse width trigger module is made from brushed aluminum. When connected to the main unit, this module upgrades the logic analyzer with a new feature: the ability to trigger on a high or low pulse whose duration is outside a user-selected range. This may be useful when working on PWM applications like dimmable lighting.
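
Reduced to its essence, the trigger condition is just a range check on the measured pulse duration. The numbers below are made up for illustration:

```python
# The pulse-width trigger condition, reduced to its essence (made-up numbers).
def pulse_width_trigger(width_us: float, lo_us: float, hi_us: float) -> bool:
    """Fire when a pulse is shorter than lo_us or longer than hi_us."""
    return width_us < lo_us or width_us > hi_us

# e.g. a servo PWM pulse expected to stay between 1000 and 2000 microseconds
print(pulse_width_trigger(2300.0, 1000.0, 2000.0))   # True -> capture starts
```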

The main logic analyzer unit is made from an attractive gloss white plastic. The external interface is minimal: a button to start immediate captures, and status lights for Run, Read, and Trigger. The external connector is two rows of standard 0.1 inch male headers. In addition to the 32 data channel connections and ground pins, there’s also an external clock input, and a few special I/O pins for connecting the pulse width trigger module or a second LAP-C unit.

The LAP-C does not offer any analog measurement capability. While serious analog work demands an oscilloscope, some newer logic analyzers are starting to add low-bandwidth analog capability too. This can be useful for verifying that signal voltage levels are in a valid range, or that power supplies are behaving as expected. Analog measurement is a nice extra feature, but its absence here isn’t a deal-breaker.

 
Software

The LAP-C software is Windows-only. That’s not a big limitation, since any halfway-serious engineer will have access to a Windows machine even if it’s not their main system, but Macintosh and Linux software versions would have been nice. Windows 7, 8, and 10 compatibility is advertised. I used software V3.13.05 with Windows 7. You can download the software and try it out in demo mode, even if you don’t have the LAP-C hardware. The English-language software translations are generally well done and easy to understand.

The graphical interface appears geared towards advanced users, with a very large number of tools, measurements, and options visible directly on the main screen. This is great for power users, but may be overwhelming for newbies, or for anyone accustomed to the minimal interface of Saleae’s software. I found it a bit daunting at first, but after 30 minutes of experimenting and checking the manual, I had mastered the basics.

Some aspects of the UI would benefit from more polish. When viewing large data ranges, panning and zooming the waveform can be glitchy, with a laggy response to mouse input and different portions of the waveform scrolling a fraction of a second after others. Compared to the smooth pan and zoom of competing logic analyzer software, or to tools like Google Maps, that’s a little disappointing. Other minor graphical glitches and goofs sometimes appear, like waveforms that aren’t a uniform height or scale values in the wrong units. The intervals of the ruler are also oddly chosen, with tickmarks at 0.409374912 seconds followed by 1.91042 seconds and then 3.411465 seconds, instead of divisions at even intervals using powers of 10. None of this prevents you from analyzing captured sample data, but it can be confusing and distracting.

A data capture can be started immediately by pressing a button in the software, or by pressing the physical button on the logic analyzer unit. Captures can also be triggered by a level or edge condition on a single channel, or by more complex functions like a specific value on a parallel bus, or a protocol-level event like an I2C NACK.

There are an impressive number of powerful features embedded in the LAP-C software. Repetitive run triggers the logic analyzer over and over, so you can look for one instance that’s different from the others. Noise filter ignores pulses with a duration below the threshold you specify. Data can be captured with an asynchronous internal clock, or synchronously using an external clock connection. Captured data can be searched in several different ways to find a region of interest. Waveforms and data can be imported and exported for additional examination with other tools. There are many more advanced analysis features that I didn’t have time to explore deeply. Some of these may only be valuable in specific circumstances, like the memory analyzer for examining address/data information, but they’re great when they provide the special capabilities you need.

One small but curious shortcoming: in the software’s list of choices for the sampling rate, it skips directly from 1 MHz to 10 MHz with no intermediate options. Most of my electronics projects operate at speeds faster than 1 MHz but slower than 10 MHz, so the missing sample rate options are a bigger issue for me than they might be for others.

Capturing and viewing sets of raw waveforms is great, and is the core of logic analyzer usage. But to really get the most out of the tool, you’ll want to use the software’s protocol analyzers to provide a high-level representation of the data. A protocol analyzer is a software module that converts raw waveforms into decoded data packets. For example, if the software offers an I2C protocol analyzer, then you can view the captured sample data as a series of device addresses, data bytes, and ACKs, instead of counting raw transitions of the SDA and SCL signals while you thumb through an I2C reference guide. The LAP-C software offers over 100 protocol analyzers, including I2C, SPI, serial (RS-232C/422/485), JTAG, PS/2, USB, 1-WIRE, CAN, IRDA, 3-WIRE, and many more. Some of these used to be extra-cost options, but since 2016 all the protocol analyzers are available free. Here’s a typical protocol analyzer example from Zeroplus:
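
To give a rough idea of what a decoder like that does under the hood, here’s a toy Python sketch of my own (not Zeroplus code) that turns raw SCL/SDA samples into I2C data bytes and ACK flags. It skips start/stop detection, repeated starts, clock stretching, and everything else a real decoder handles:

```python
# Toy illustration of what an I2C protocol analyzer does: turn raw SDA/SCL
# samples into data bytes and ACK/NACK flags. A real decoder also handles
# start/stop conditions, repeated starts, clock stretching, and more.
def decode_i2c(scl, sda):
    """scl, sda: equal-length lists of 0/1 samples. Returns (byte, acked) tuples."""
    bytes_out, bits = [], []
    for i in range(1, len(scl)):
        if scl[i - 1] == 0 and scl[i] == 1:      # sample SDA on each SCL rising edge
            bits.append(sda[i])
            if len(bits) == 9:                   # 8 data bits followed by the ACK bit
                value = 0
                for b in bits[:8]:               # MSB first
                    value = (value << 1) | b
                bytes_out.append((value, bits[8] == 0))  # ACK is SDA pulled low
                bits = []
    return bytes_out

# One byte, 0x55, acknowledged: SCL toggles nine times while SDA carries the bits.
scl = [0, 1] * 9
sda_bits = [0, 1, 0, 1, 0, 1, 0, 1, 0]           # MSB-first 0x55, then ACK (0)
sda = [b for bit in sda_bits for b in (bit, bit)]
print(decode_i2c(scl, sda))                      # [(85, True)]
```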

 
How Much Sample Memory Do You Need?

When purchasing a buffered logic analyzer, you need to predict in advance how much sample memory you’re likely to need. Underestimating is bad: if there’s not enough sample memory, you won’t be able to capture as many channels for as long a duration as needed for your debugging work. But overestimating is bad too: it means you’ll pay hundreds of dollars extra for additional sample memory you don’t need. So how can you decide how much memory is enough?

This is a difficult question to answer, because it depends heavily on the type of electronics work that you do. You might work primarily with low speed serial signals, or you might need to analyze a wide parallel bus running at high speeds. Your data might be highly compressible with the LAP-C’s compression algorithm, or it might not. This leads to a frustrating “it depends” answer that satisfies no one.

For my own hobby electronics work over the past few years, I typically need to analyze systems with 5 to 10 signals running at speeds between 1 MHz and 10 MHz. My normal work pattern with a streaming logic analyzer is to manually start the capture, and record a few seconds of data, then examine it afterwards to zoom in on the interesting parts. Using a buffered logic analyzer, without the benefit of compression, this would require a worst case of 10 signals times 10 MHz times 10 seconds, or 1000 Mbits total. My simple experiments with LAP-C compression found that it provided about a 10x improvement for my data, so that reduces the memory requirement to 100 Mbits – close enough to the LAP-C 322000’s 64 Mbits. So with the top of the line LAP-C model, I could probably support my existing work pattern, but anything with less memory would be insufficient.
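
Spelled out in a few lines of Python, using my own rough numbers:

```python
# The same back-of-the-envelope math, using rough values from my own work.
channels = 10
sample_rate_hz = 10_000_000        # 10 MHz
capture_seconds = 10               # my usual "record a few seconds, look later" habit
compression_ratio = 10             # roughly what my quick LAP-C compression tests showed

raw_mbits = channels * sample_rate_hz * capture_seconds / 1e6        # 1000 Mbits
compressed_mbits = raw_mbits / compression_ratio                     # about 100 Mbits
print(raw_mbits, compressed_mbits)                                   # 1000.0 100.0

# A triggered 200 ms capture window shrinks the requirement dramatically:
print(channels * sample_rate_hz * 0.2 / compression_ratio / 1e6)     # 2.0 Mbits
```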

To really get the benefit of the LAP-C hardware, I would need to stop using 10 second long captures, and instead set up triggers to capture only the few hundred milliseconds or microseconds I’m really interested in. That would require some extra time for setup, but would greatly reduce the amount of memory needed.

 
Other Thoughts

Zeroplus’ web site lists two official US distributors, and one appears to only stock the entry level LAP-C 16032, leaving a single source for most LAP-C models in the US. With a bit of Google searching, you can also find third-party LAP-C sellers from Asia who will ship directly to the US. Direct sales from the Zeroplus web site would be a welcome addition, but are not currently an option.

Is the Zeroplus LAP-C series right for you? It depends on what you plan to do with it. For the casual electronics hobbyists who are typical readers of this blog, the $1699 price tag of the LAP-C 322000 may be more than they’re willing to spend, but they probably don’t need all of its features either. Zeroplus offers three LAP-C models with slightly lower specs, all priced below $400, which are probably a better fit for part-time garage hackers. I expect that most hobbyists will be fine stepping down from 32 to 16 channels, although stepping down from 64 Mbits of memory to 4 Mbits or 1 Mbit may be a concern.

With 4 Mbits and 16 channels, at a 10 MHz sample rate and using compression, you’ll get a capture window of a couple hundred milliseconds. That’s small enough so that manual triggering is unrealistic. Instead, you’ll need to plan ahead of time to decide what your goal is with this capture, and how you can construct a trigger to capture exactly the region you need to examine. For power users working with high speed systems and large numbers of channels, this will already be familiar. They’re more likely to know specifically what they’re looking for, and to see complex trigger setup as a normal and expected part of debugging. They’re also more likely to benefit from some of the LAP-C’s advanced features and hardware options like external clock input, pulse width triggering, and detailed software analysis. The breadth and depth of the LAP-C software’s various tools and analyzers is impressive, and will surely be a boon for hardcore electronics engineers. Beginners and others with more basic needs may be better off with an 8-channel streaming logic analyzer with a simple and friendly software interface.

What about the general merits of buffered vs streaming logic analyzer designs? For situations where there’s enough USB bandwidth to stream the sample data on the fly, I believe that approach provides a better user experience. There’s no need to worry about trading off sample rate for sample depth, or about whether the capture window will be large enough to contain all the events of interest. But for more demanding situations with large numbers of channels and high sample rates, streaming just doesn’t work. 32 channels at a 200 MHz sampling rate is 6.4 Gbits/sec, which is more than USB 3.0’s total maximum theoretical bandwidth and far more than its real-world bandwidth. So depending on the application, both streaming and buffered designs have their place.
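
The arithmetic is simple enough to spell out. The 5 Gbps figure for USB 3.0 is the raw Gen 1 signaling rate, before protocol overhead:

```python
# Raw sample bandwidth at the high end vs. what USB can move.
channels = 32
sample_rate_hz = 200_000_000                      # 200 MHz
raw_gbps = channels * sample_rate_hz / 1e9        # 6.4 Gbits/sec of sample data
usb3_signaling_gbps = 5.0                         # USB 3.0 Gen 1 signaling rate, before overhead
print(raw_gbps, raw_gbps > usb3_signaling_gbps)   # 6.4 True
```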

My ideal logic analyzer would be one that operates in streaming mode where sufficient USB bandwidth is available, and falls back to buffered operation when streaming won’t work. That would probably be needlessly complex and expensive, but I can dream!

