
Call For Yellowstone Beta Testers

Here we go! Beta testing kicks off today for the Yellowstone universal disk controller for Apple II. Yellowstone is a replacement for the Liron disk controller, the Apple 3.5 disk controller, and the standard Disk II controller, all rolled into one. It even handles Macintosh disk drives. After a soldering marathon this week, I now have five working Yellowstone boards and three working Yellowstone testers. All the testers work with all the boards. The ducks are finally in a row. Let’s do this.

Before jumping into beta testing specifics, some readers may be curious to know how the Yellowstone tester turned out. The goal of the tester is to program and verify a newly-assembled Yellowstone board, as quickly and easily as possible, without requiring any other equipment. The tester development proved to be a major project in its own right, and occupied most of my available time for the past couple of months, but I’m happy to say it’s finally working. Here’s a video demonstrating the tester operation:

 
Calling Beta Testers

I need your help! I’ve tested the hell out of Yellowstone, but I won’t be truly confident it’s ready until it’s passed through other people’s hands, with other types of computers and disk drives, and other software environments. That’s where you come in.

The ideal Yellowstone beta tester will have:

  • at least 5-10 hours available for testing in the next two weeks
  • a personality that enjoys making lists, spreadsheets, and similar record-keeping
  • a variety of Apple II computers and disk drives, especially an Apple II+, Duo Disk, Unidisk 5.25, Apple SuperDrive (FDHD Drive G7287), third-party 3.5 inch drives from Applied Engineering / Chinon / Laser / American Micro Research, and CPU accelerator cards.

I need to be slightly picky about who gets these beta cards, since I only have a few of them. The cards should go to people who are in the best position to help with testing, either by virtue of the hardware they have or the energy they’re willing to put into methodical testing. If this appeals to you, please get in touch with me using the Contact link at upper-right of the page, and let’s talk! If you’re more interested in Yellowstone for personal use and you aren’t sure you’ll have time or energy for beta testing, you’ll have an opportunity to get one soon when they become generally available.

 
What You’ll Get

The draft instruction manual for Yellowstone offers a good overview of its capabilities. In brief, any type of Apple II or Macintosh floppy drive from the 1970s – 1990s should work with Yellowstone, as well as intelligent Smartport-based hard disks such as Floppy Emu’s Smartport HD emulation mode. It supports two drives of different types connected at the same time, or up to five drives in the case of intelligent Smartport devices. Drives with 19-pin D-SUB DB-19 connectors and 20-pin rectangular connectors are both supported.

For the beta testers, I will also be including a pair of DB-19 female adapters. These will probably be a separate accessory when Yellowstone goes on sale, since not everyone will need them, and the DB-19 female connectors are rare and somewhat expensive. You’ll need these DB-19F adapters when connecting drives like the Duo Disk, Apple 3.5, or Macintosh M0131. The adapters aren’t needed when connecting a Floppy Emu, a Disk II, or internal / bare drive mechanisms using a ribbon cable with a 20-pin rectangular connector. The wiring of the DB-19F adapters is designed only for use with Yellowstone, so don’t try to connect them anywhere else.

I think that’s everything, and I’m very excited to be launching the Yellowstone beta today. Thanks in advance for your help, and don’t hesitate to contact me if you’re interested in being a beta tester.


ROM-inator Resurrections

Good news! Kay Koba at Kero’s Mac Mods store is now selling pre-packaged ROM-inator kits for the Macintosh Plus, Mac 512Ke, 512K, and 128K. This is a recreation of the original Mac ROM-inator kit that I designed in 2015. The design is open source, and Kay’s new version is called “ROM-inator Resurrections”.

The ROM-inator replaces the stock 64K or 128K of ROM in a compact Macintosh with a full 1 MB of flash memory. Once installed, the flash ROM’s contents can be updated via software from within the running Macintosh, allowing for extensive customization. The replacement ROM adds a bootable ROM disk to your Macintosh, provides built-in HD20 disk support, replaces the startup sound, changes the Happy Mac icon, and makes it possible to edit the ROM disk or even tweak the ROM toolbox code.

  • Startup beep is replaced by a glass “ping”
  • Happy Mac icon is replaced by a Mac wearing sunglasses
  • Pirate icon is displayed while waiting to load the ROM disk
  • ROM disk image includes System 6, MacWrite, MacDraft, and eight games
  • 128K ROM code turns a Mac 128K or 512K into a 128Ke or 512Ke

The ROM-inator is a descendant of Rob Braun’s original Mac Plus ROM Adapter and disk driver. More details about its inspiration and development are here.

When first powered on, the Macintosh will play a customized startup sound, and display a “pirate Macintosh” icon. To boot from the ROM disk, press and hold the R key on the keyboard for a few seconds. If R is not pressed, the Macintosh will boot normally from an attached SCSI disk, or wait for a floppy disk to be inserted.


The utility program Flash Tool can update the flash ROM from within the running Mac. Alternatively, the flash chips can be removed from their sockets and reprogrammed using a standard EPROM programmer.


You can buy a ROM-inator Resurrections kit from Kero’s Mac Mods store. Please refer to their store with any questions or tech support needs; BMOW does not provide any support for these.

Happy ROM hacking!


Zener Regulator Trouble

In my attempts to resolve some logic level problems with the Yellowstone Tester, I decided to take a crude approach, and reduce the 5V supply to about 4.7V using a simple zener diode voltage regulator. From my previous tests with a variable voltage supply, the reduced voltage at 4.7V was enough to get the tester working nicely. But now when I try combining a fixed 5V supply with a resistor and a 4.7V zener, I find that it doesn’t work as expected. Zener regulators apparently don’t behave the way I thought they did, and for this specific circuit, they may not work at all.

The problem is that the rated voltage of a zener diode is only valid for a specific level of current. I knew this, but I thought the voltage would only change a small amount over a wide range of current: perhaps 100 mV of change for the 10 mA to 100 mA range where my circuit operates. In other words, I thought the IV curve for the zener would be very steep, nearly a vertical line.

This proved to be a bad assumption. Using a 1N4732A zener, at a current of 90 mA I do see 4.7V on the zener, but the voltage is only 4.506V at 40 mA, 4.459V at 30 mA, 4.335V at 20 mA, and 4.212V at 10 mA. That’s not good. It’s non-linear, but if it were a resistor then its value would be about 6.1 ohms. I’m not sure how I was meant to know this from the datasheet, which doesn’t include any IV curves. The datasheet does include a dynamic resistance (impedance?) number, but I thought that was for AC applications since it’s specified for a frequency of 1 kHz.
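
Here’s the arithmetic behind that 6.1 ohm figure, as a quick Python sketch using the measured points above:

# Estimate the 1N4732A's effective dynamic resistance from the
# measured operating points (current in mA, voltage in volts).
measurements = [(10, 4.212), (20, 4.335), (30, 4.459), (40, 4.506), (90, 4.700)]

# Slope between the two extreme points: delta-V / delta-I.
(i_lo, v_lo), (i_hi, v_hi) = measurements[0], measurements[-1]
r_dynamic = (v_hi - v_lo) / ((i_hi - i_lo) / 1000.0)  # ohms
print(f"Effective dynamic resistance: {r_dynamic:.1f} ohms")  # about 6.1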

 
Voltage Regulator Says What?

So how are you supposed to build a voltage regulator using a zener with this much voltage variability? The standard way to build a zener regulator is with a resistor in series with a zener, and a load in parallel with the zener:

If Vout is ever higher than VZ, then the zener will pass more current, which also increases the current through the resistor, which increases the voltage drop across the resistor and lowers Vout. So Vout gets “regulated” to the value of VZ, so long as the current through the zener remains constant.

If the circuit has a fixed load, then the value of the resistor can be chosen to get the desired current through the zener to achieve the nominal zener voltage. But if the circuit has a variable load, there’s a problem. In order to maintain a regulated output voltage, the voltage drop across the resistor must remain unchanged while the load current changes. That means the current through the resistor must also remain unchanged. The only thing that will change is how much current flows through the load and how much flows through the zener. If the load draws less current, then the zener must draw more, in order to keep the total current constant.

Let’s say the load current can vary between 10 and 100 mA, and the zener can theoretically handle 150 mA before it burns up. For safety’s sake you probably don’t want to push the zener all the way to 150 mA though, so you might limit it to 120 mA. You could choose a resistor value to achieve a constant current of 130 mA through the resistor, split between the zener and the load: anywhere from 120/10 mA for light loads up to 30/100 mA for heavy loads. That’s a 90 mA range of currents through the zener, so with a 6.1 ohm equivalent resistance you would see a 0.549V change in the output voltage. That’s not very well regulated at all.
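
Here’s that worked example as a quick Python sketch, using the 6.1 ohm effective resistance measured earlier:

# A constant 130 mA flows through the series resistor; whatever the
# load doesn't take must flow through the zener instead.
r_zener = 6.1            # ohms, effective dynamic resistance
i_resistor_ma = 130.0    # constant current through the series resistor

i_zener_light = i_resistor_ma - 10.0    # light load: zener takes 120 mA
i_zener_heavy = i_resistor_ma - 100.0   # heavy load: zener takes 30 mA

# The zener voltage shifts with its current, so Vout swings by delta-I * r.
swing_v = (i_zener_light - i_zener_heavy) / 1000.0 * r_zener
print(f"Zener current range: {i_zener_heavy:.0f} to {i_zener_light:.0f} mA")
print(f"Output voltage swing: {swing_v:.3f} V")  # about 0.549 V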

If I’ve analyzed this all correctly, then a zener voltage regulator basically doesn’t work unless the load current never varies by more than about 10 to 20 mA. That’s a pretty lousy regulator. I’m not sure how I never realized this about zener regulators before.

 
Plan C

So now what? I still need to get the Yellowstone Tester working. The original 5V circuit has problems with the logic levels, and Plan B for reducing the supply voltage to 4.7V with a zener seemingly doesn’t work either. Would I be better off with a simple Schottky diode in series with the 5V supply input? This wouldn’t be perfect either, since the nominal 5V supply might actually be closer to 5.1V or 4.9V, and the voltage drop across the diode will change with the load current. But I still think it would be enough to keep the voltage within a range of about 4.8V to 4.5V, which is better than I’ll get with the zener.
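
Here’s a quick sanity check of the corner cases, with rough assumed values for the Schottky forward drop:

# Worst-case corners for the series Schottky idea. The 0.3 to 0.45 V
# drop range is a rough assumption; the real drop varies with current.
for v_supply in (4.9, 5.1):
    for v_diode in (0.3, 0.45):
        print(f"supply {v_supply:.1f} V, drop {v_diode:.2f} V "
              f"-> output {v_supply - v_diode:.2f} V")
# The corners span roughly 4.45 to 4.8 V, close to the estimate above.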


5V Logic Level Errors

The Yellowstone tester is suffering from the logic level blues. I made a blundering error in its design, which I only discovered now. Like most of my projects, the tester uses a mix of voltages, with the MCP23S17 port expanders running at 5V while the STM32 microcontroller and everything else are running at 3.3V. The card being tested is nominally a 5V device, but it uses TTL signal levels where anything above 2V is considered a logical “1” value. This whole menagerie seemingly worked fine during the tester’s prototyping and development, but it actually has some major problems.

The relevant pins on the STM32 are 5V-tolerant, so that part is fine. Unfortunately I failed to check the input voltage thresholds on the MCP23S17. Now I see that it requires a voltage of at least 0.8 * VDD to detect a logic “1” input value, which means a threshold of 4.0V when VDD is 5V. The STM32 signals are never going to exceed 3.3V, so that’s no good. The so-called 5V signals from the card being tested mostly won’t reach 4.0V either. Some of those signals are driven by 74LS logic with typical high values about 3.4V, and others by 3.3V 74LVC devices.
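
Here’s the threshold arithmetic as a quick Python sketch, including slightly-high supply voltages, which will matter below:

# The MCP23S17 requires VIH >= 0.8 * VDD for a logic "1", while the
# STM32 drives at most 3.3 V and 74LS outputs sit near 3.4 V.
for vdd in (5.0, 5.05, 5.1):
    vih = 0.8 * vdd
    print(f"VDD = {vdd:.2f} V -> threshold {vih:.2f} V, "
          f"shortfall vs 3.3 V: {vih - 3.3:.2f} V")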

I’m slightly amazed that I completed the tester prototype, tester PCB, and development of all the tester software without discovering this glaring problem. In fact just yesterday I’d privately declared the tester to be “done”. It was working well, able to program a virgin Yellowstone card and run a large series of functional and electrical tests in just a few seconds’ time. Aside from some rare flakiness I couldn’t reproduce, everything looked good. But then I tried using a different power supply and everything fell apart.

 
How Did This Ever Work?

With the aid of an adjustable power supply, I eventually found that the tester worked reliably at supply voltages up to 5.05 volts. Above that, the STM32 was unable to communicate with the MCP23S17 port expanders. Some of my other power supplies produce about 5.08 to 5.1 volts with a light load, so that explains why they didn’t work. With a higher VDD, the logic “1” input threshold of the MCP23S17 is also higher, creating a larger shortfall for 3.3V or 5V-TTL signals.

It’s surprising that this ever worked at all. Even at precisely 5.0V, the best case would be a 3.3V signal from the STM32 going into a MCP23S17 input with a 4.0V threshold. A shortfall of 0.7V is pretty large. And yet it did work nicely, at least in this particular circuit, over several weeks of tester development.

 
Choosing a Voltage

The MCP23S17 port expander can operate with a supply voltage of 5V or 3.3V; I intentionally chose 5V here because the chip interfaces with a card being tested that’s nominally a 5V device, and I was worried about damaging the MCP23S17. Even though most of the output signals from the card should be lower-voltage TTL-level logic signals, at least some of the signals may truly be 5.0 volts. And if the card is defective, which is part of what the tester needs to determine, then any of the card’s signals could be unexpectedly at 5.0V. A chip that’s running at 3.3V can be damaged if 5V is applied to an input pin. If the MCP23S17 has 5V-tolerant inputs then it would be OK, but sadly it doesn’t, so I need to run the chip at 5V to safely read 5V signals. (Probably. See below.)

You can view the MCP23S17 datasheet for the details. In the section for Absolute Maximum Ratings, it says the voltage on all pins should not exceed VDD + 0.6V, and the input clamp current when Vi > VDD should not exceed 20 mA.

 
How Do I Fix This Mess?

The tester isn’t a product, it’s a tool for developing a product – the Yellowstone disk controller card. I only plan to build three or four testers, and they’ll all be in my possession, or given to whatever PCB assembler I choose to work with. That means I can consider some unconventional fixes here that I would never do on a mass-produced product. If at all possible, I’d like to fix this with some minor surgery to the tester PCB. I really don’t want to design a new tester PCB, add more components and level-shifters, etc. But I need to be confident that the tester is reliable, and if I give one to a PCB assembler, it can’t flake out or give false positives due to minor variations in supply voltage, temperature, or parts substitution on the card being tested.

The fundamental problem is the gap between the 4.0V logic “1” voltage threshold required by MCP23S17, and the lower voltages produced by the STM32 and the card being tested. I either need to raise those voltages up, or lower the MCP23S17 threshold down. None of the possibilities look very promising.

Raising up the lower voltages is basically out of the question. They are what they are, and it would be painful and impractical to insert 5V level shifters on 80-some signals from the card and the STM32.

Lowering the threshold is the only plausible option, which means lowering the supply voltage of the MCP23S17. How much should I lower it, using what method? What other problems might this cause? Should I also lower the supply voltage for the card being tested?

I could lower the MCP23S17 threshold by changing the chip to use a 3.3V supply instead of 5V. But then I would create a new problem where 5V signals might be applied to the chip running at 3.3V, exceeding the absolute maximum rating of VDD + 0.6V and potentially damaging the chip. I could possibly put a series resistor on each of the pins, in order to keep the input clamp current under 20 mA, although this would create some other difficulties and would require making a new PCB.
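
For illustration, here’s the rough resistor sizing that the 20 mA clamp limit implies. This is a sketch of the arithmetic, not a tested design:

# Size a series resistor to keep the input clamp current under the
# 20 mA absolute maximum when a 5 V signal drives a 3.3 V-powered pin.
v_in_max = 5.0
vdd = 3.3
v_clamp = vdd + 0.6     # point where the internal protection diode conducts
i_clamp_max = 0.020     # amps, datasheet absolute maximum

r_min = (v_in_max - v_clamp) / i_clamp_max
print(f"Minimum series resistance: {r_min:.0f} ohms")  # about 55 ohms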

If I also reduced the supply voltage for the card being tested to 3.3V, it would eliminate the over-voltage concern, but a Yellowstone card won’t work correctly at 3.3V. It has a 74LS244 chip, with a minimum supply voltage of 4.5V.

 
Diode To The Rescue?

There are no great answers here, but the most promising option I can think of is reducing the supply voltage for the MCP23S17 and for the card being tested to about 4.6V. Using the same voltage for both means there’s no over-voltage risk. At 4.6V the Yellowstone card should still work OK. The reduced voltage would lower the MCP23S17 input threshold to 3.68V, which is still too high, but my tests with the actual hardware show that it should work anyway. The hack-tastic way to get about 4.6V from a 5V source would be using a Schottky or germanium diode to drop a few tenths of a volt. This would be fairly simple to do.

Alternatively I could run the card at 4.6V, but lower the MCP23S17 supply even further to 4.2V, using a second diode. This would still be within the MCP23S17’s absolute maximum rating of VDD + 0.6V, so there shouldn’t be an over-voltage risk so long as the voltages don’t vary too far from their expected values. With a 4.2V supply, the MCP23S17 input threshold would be reduced to 3.36V, which is close enough to 3.3V that I’d be more confident in its reliability.
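
Here are both options in numbers, assuming roughly 0.4V per Schottky drop (the actual drop varies with current and diode type):

# Each series Schottky lowers the MCP23S17 supply by about 0.4 V,
# and the 0.8 * VDD input threshold drops along with it.
v_supply = 5.0
v_schottky = 0.4  # assumed per-diode drop

for diodes in (1, 2):
    vdd = v_supply - diodes * v_schottky
    print(f"{diodes} diode(s): VDD = {vdd:.1f} V, threshold = {0.8 * vdd:.2f} V")
# 1 diode:  VDD 4.6 V, threshold 3.68 V (still above a 3.3 V signal)
# 2 diodes: VDD 4.2 V, threshold 3.36 V (close enough, per my tests)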

Complicating all of this is the presence of a 1 ohm sense resistor on the tester’s power supply, which is used for current measurements. Under normal operation, this drops about 50 to 100 mV. If the power supply is nominally 5V, then the MCP23S17 and the card being tested will see a voltage closer to 4.9V.

Another complication is that the 74LS244 on the card being tested will output 5V TTL signal levels, and according to its datasheet, its logical “1” output voltage may be as low as 2.4V. I can’t reduce the MCP23S17 supply voltage far enough to support an input threshold that low. But in practice for a correctly-operating Yellowstone card, and the tester circuitry that’s connected to it, the 74LS244 voltage should be closer to 3.0 – 3.5V, similar to the voltages from the 3.3V STM32.

Using a diode to create intermediate supply voltages is a gross hack, and I don’t like it. It might work during my desk experiments, but then fail later in a different environment. And yet I’m not sure what better alternative I have, unless I’m willing to make major modifications to the tester that will set me back by several weeks. The choices all look bad. I’m embarrassed that I made it this far into tester development without noticing such a fundamental design error.


The Yellowstone Tester

I once said I’d spend a maximum of one day on development of an automated tester for the Yellowstone disk controller. Haha, that was so cute. Today marks two months since I began work on the tester. Today was also the first try with the final (I hope) tester hardware. So far it seems to work, mostly.

Yellowstone is an FPGA-based disk controller for Apple II computers. It’s complex enough that fully testing each board is a non-trivial task. Manually testing a large batch of boards would be out of the question, so an automated tester is needed. The heart of the tester design is an STM32 “Blue Pill” board, combined with four Microchip port expander chips to reach a total of 80 I/Os. There are also a few analog sensors for measuring current and voltage, as well as a current switch IC that will disconnect the board being tested if it draws too much current.

 
Unexplained Current Changes

The ability to measure the supply current was one of the key features of the tester design. Unfortunately it doesn’t seem to work as well as I’d hoped. I was convinced that I needed to measure the combined supply current of the board being tested and the tester itself, in order to capture all possible paths where a short circuit might occur, and that works. So far, so good.

The odd thing is that there are unexplained fluctuations in the measured current. With no Yellowstone board present, the current sensor reports about 26 mA used by the tester itself. But if I run code that sits in a loop repeatedly measuring the current while doing nothing else, the measured current sometimes jumps briefly to 35 or 40 mA. This happens roughly once every second, but not consistently enough to form a reliable pattern. With a Yellowstone board present, the current is higher, but similar measurement fluctuations still occur. At first I thought this was some deficiency in the STM32 ADC, but other analog measurements by the STM32 don’t have the same issue.

I’m not sure if these current changes are real, perhaps caused by some internal activity of the STM32 briefly increasing the supply current, or if the changes are somehow an artifact of how I’m measuring. Either way, the fluctuations are large enough to undermine most of what I’d planned to use the current measurements for. A measure of 70 mA +/- 25 mA isn’t accurate enough for much diagnosis beyond detecting a hard short-circuit.
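
For a sense of scale, here’s a sketch of how a sense-resistor reading becomes a current measurement. The 12-bit resolution and 3.3V reference are typical STM32 values rather than the tester’s exact configuration:

# Convert a raw ADC reading of the 1 ohm sense resistor's voltage
# drop into a supply current in mA. Values here are illustrative.
ADC_FULL_SCALE = 4095   # 12-bit ADC (assumption)
V_REF = 3.3             # ADC reference voltage (assumption)
R_SENSE = 1.0           # ohms, current sense resistor

def adc_to_milliamps(adc_counts: int) -> float:
    v_drop = adc_counts / ADC_FULL_SCALE * V_REF
    return v_drop / R_SENSE * 1000.0

# A 26 mA load drops only 26 mV, about 32 ADC counts at this scale,
# so the fluctuations above amount to only about a dozen counts.
print(f"{adc_to_milliamps(32):.1f} mA")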

 
Unresponsive ICs

A second problem I encountered is that the Microchip port expander ICs occasionally don’t respond to SPI commands. This often happens when I first turn on the tester after it’s been off for several minutes, but never happens when turning it briefly off and on, or resetting the tester while keeping the power on.

Surprisingly (or maybe it’s not surprising) there seems to be a relationship between the current fluctuations and the unresponsive port expanders. After the board has been off for several minutes, and is then turned on, I’ll very often see an immediate current fluctuation followed by unresponsive port expanders. I’ve added some code when the tester starts up that will repeatedly poll the port expanders until they respond as expected, and this appears to be working for now.
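
The workaround amounts to a retry loop. Here’s the idea as a Python sketch with hypothetical helper names; the real firmware is C code running on the STM32:

import time

def wait_for_expanders(read_register, expected, retries=100, delay_s=0.01):
    """Poll a port expander until it responds as expected, or give up."""
    for _ in range(retries):
        if read_register() == expected:
            return True
        time.sleep(delay_s)
    return False

# Example with a hypothetical SPI read function:
# ok = wait_for_expanders(lambda: spi_read_iocon(), expected=0x08)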

The tester PCB isn’t well designed for probing internal signals to see what’s wrong. Because I already built a breadboard prototype of the tester previously, and thought it was working, this new PCB was designed for small size rather than for debugging. It may require some fancy soldering and old-fashioned detective work to figure out what’s happening here.


WordPress Latin1 and UTF-8, Part 2

Yesterday I wrote about some BMOW blog troubles displaying special characters and international characters, which was apparently triggered by a recent update to MySQL 8 at my web host. Old pages containing special characters like curly quotes, accented letters, or non-Latin characters were suddenly rendering as garbled combinations of random-looking symbols, whereas they were previously OK. If you read the follow-up comments, you saw that I was eventually able to resolve the problem (mostly) by adding these lines to my wp-config.php file:

define('DB_CHARSET', 'latin1');
define('DB_COLLATE', '');

But I didn’t fully understand what those lines changed, or exactly why this problem appeared in the first place. After some digging in the MySQL database, I think I have a slightly better understanding now.

 
Back to Kristian Möller

I returned to the example of Kristian Möller, whose name contains the letter o with umlaut. After the MySQL update, the name was appearing incorrectly as MÃ¶ller. This is what you’d expect to see if the UTF-8 bytes 0xC3 0xB6 for ö were incorrectly interpreted as two separate Latin1 bytes, 0xC3 for Ã and 0xB6 for ¶.
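
This is easy to demonstrate in a few lines of Python:

# The correct UTF-8 bytes for 'ö' are 0xC3 0xB6; reading them as
# Latin1 turns the one character into two.
name_utf8 = "Möller".encode("utf-8")
print(name_utf8.hex())              # 4dc3b66c6c6572, containing c3 b6
print(name_utf8.decode("latin-1"))  # MÃ¶ller, the garbled rendering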

Using phpmyadmin, I was able to connect to the live WordPress DB, and examine the wp_comments table where this name is stored. The result for comment_id 233746 displays the author’s name as both text and as raw hex bytes. You can see the hex bytes contain the sequence C3B6, which is the correct UTF-8 byte sequence for ö. That’s great news. It means the contents of my database text are correct and uncorrupted UTF-8 bytes. But all is not well – the metadata associated with the table is wrong. It thinks the text is Latin1, and displays it as such in the phpmyadmin UI. I was able to confirm this by executing the SQL command:

show create table wp_comments

This echoes back the SQL command that was originally used to create this table, way back in 2007. And lo and behold, part of that original SQL command specified CHARSET=latin1. Ever since then, WordPress has been storing and retrieving UTF-8 text into a Latin1 table in the database. This is bad practice, but it worked fine for 14 years until the MySQL update earlier this month.

 
Why Does DB_CHARSET latin1 Help?

Defining WordPress’ DB_CHARSET variable to be latin1 sounds like it’s telling WordPress what type of character set is used by the database. But if you think it through, that doesn’t fit the evidence here. If I tell WordPress that my DB data is in Latin1 format, even though as we’ve seen it’s really UTF-8, then I would expect WordPress to convert the data bytes from Latin1 to UTF-8 as it loads them during a page render. That would do exactly the wrong thing; it would cause the very problem that I’m trying to prevent.

I searched for a detailed explanation of precisely what the DB_CHARSET setting does, but couldn’t find one that made sense to me. Most references just say to change the value, without fully explaining what it does.

While I don’t have any strong evidence to support this, my guess is that a MySQL client has a choice of connecting to the MySQL database in Latin1 mode or UTF-8 mode, and this is what DB_CHARSET controls for WordPress. If the client connects as a UTF-8 client but the table is marked as being Latin1, my guess is MySQL automatically translates the data. Normally that would be a good thing, but if UTF-8 data were stored in a table improperly marked as being Latin1, it would cause unwanted and unnecessary character conversions, causing the types of problems I saw on the blog.
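
If that guess is right, the conversion chain would look something like this Python sketch of the suspected behavior:

# UTF-8 bytes sit in a table marked Latin1, and a UTF-8 client
# connection makes the server "helpfully" convert them on the way out.
stored = "ö".encode("utf-8")          # 0xC3 0xB6, the bytes on disk
as_latin1 = stored.decode("latin-1")  # the server believes this is 'Ã¶'
sent = as_latin1.encode("utf-8")      # re-encoded for the UTF-8 client
print(sent.decode("utf-8"))           # the client faithfully shows 'Ã¶'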

 
Why Did This Break Now?

So what changed during the recent MySQL update to suddenly break this? Why did the problem appear now? Initially I suspected the underlying data bytes had become corrupted during the update, but the hex display from phpmyadmin showed the data bytes are OK.

I can’t say for certain whether the problem was caused by exporting and reimporting my database, or whether it’s due to new behavior in MySQL 8. Now that I think about it, I’m not even certain whether the result I saw from that show create table wp_comments was actually the original SQL command from 2007, or the SQL command from eight days ago when the database was migrated to MySQL 8.

If these database tables were always explicitly marked Latin1, going all the way back to 2007, then I think this character set conversion problem would always have happened too. Or at least it would have happened as soon as I updated to my current version of WordPress, instead of when I updated to a new version of MySQL.

One possibility is that with the old database and old version of MySQL, the character set for the database tables wasn’t explicitly defined. It relied on some database-wide default which just happened to be UTF-8, so everything worked when WordPress connected to the DB as a UTF-8 client. Then during the MySQL 8 update, somehow the tables were explicitly set to Latin1 and the problem appeared.

Another possibility is that the tables were already explicitly Latin1, but WordPress was previously connecting to the database as a Latin1 client, so it worked OK. Since my version of WordPress hasn’t changed recently, this would mean the default database connection type for WordPress must somehow come from the database itself, or the database server, and that’s what changed during the MySQL 8 update.

Whatever the explanation, changing DB_CHARSET now seems like only a temporary solution. I still have UTF-8 data stored in tables that say they’re Latin1, which seems likely to cause more problems down the road. If nothing else, it makes the output display incorrectly in the phpmyadmin UI. A full solution will probably require some more significant database maintenance, but I hope to postpone that for a while.

