
Introducing the Mac ROM-inator II

[Photo: Mac ROM-inator II module, front and back]

Say hello to a new retro-computing gizmo – the Mac ROM-inator II! It will supercharge your vintage Macintosh II series or SE/30 computer by replacing the stock ROM with a programmable flash memory module. Add a bootable ROM disk, change the startup sound, hack the icons, gain HD20 support and a 32-bit clean ROM. Go nuts! The Mac ROM-inator II is available now in the BMOW Store.

The flash ROM comes pre-programmed with a custom ROM image that includes the following changes as defaults:

  • Customized startup chime – major 9th arpeggio
  • ROM disk image provides a diskless booting option
  • New startup menu screen displays installed RAM, addressing mode, and ROM disk details
  • Built-in HD20 disk support
  • 32-bit clean
  • Memory test is disabled for faster booting
  • Happy Mac icon is replaced by a color smiling “pirate” Mac

The ROM-inator II is derived from Doug Brown’s original Mac ROM SIMM design, used with permission. A portion of sales goes back to Doug.

 
ROM Hacking Magic

All the early Macintosh computers have some low-level functions stored in ROM. It’s the Mac equivalent of a PC’s BIOS. These ROM routines are responsible for initializing the computer when the power is first turned on, checking to see what hardware is installed, finding an attached disk with OS software on it, and booting that OS. Even after the OS has booted, the ROM routines still handle many low-level functions like interrupt handling, keyboard support, and floppy I/O, as well as some higher level functions like drawing windows and icons. If you can control the ROM, you can control virtually everything in the computer.

Nearly every member of the Macintosh II, Quadra, and LC families has a 64-pin ROM SIMM socket on the logic board. In some cases, the stock ROM is in this socket. In others, the stock ROM is soldered directly to the logic board, but it can be overridden by a ROM SIMM in the socket. All that’s necessary is to figure out how to build a ROM SIMM that’s physically compatible, and then program it with appropriate software. A few years ago, the important details were reverse-engineered by Doug Brown and others at 68kmla.org in an epic forum thread that stretched to over 1000 posts.

The ROM-inator II is a standard PCB, shaped and sized to fit the ROM SIMM socket. It comes pre-programmed with a base ROM image that’s modeled on the Mac IIsi ROM. This is a universal ROM that’s also capable of working in many other members of the Mac II family, including the SE/30, Mac IIx, IIcx, IIci, and IIfx. By patching key ROM functions, it’s possible to alter the Mac’s behavior in fundamental ways – a new startup chime, support for additional disk types, and a modified Happy Mac being a few examples. All it takes is a 68K disassembler and a lot of patience.
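For a taste of what a patch involves: after any bytes are changed, the ROM image’s checksum has to be repaired, since the startup diagnostics verify it. Here’s a minimal C sketch of that fixup step, based on my understanding of the classic Mac ROM checksum. The helper name is mine, and this isn’t code from the ROM-inator sources, so verify it against a known-good ROM dump before trusting it.

#include <stdint.h>
#include <stddef.h>

/* Classic Mac ROMs store a 32-bit checksum in the first four bytes,
   computed (as I understand it) by summing every subsequent big-endian
   16-bit word. After patching any bytes, recompute and store it. */
static void fix_rom_checksum(uint8_t *rom, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 4; i + 1 < len; i += 2)
        sum += (uint32_t)((rom[i] << 8) | rom[i + 1]);

    rom[0] = (uint8_t)(sum >> 24);    /* stored big-endian */
    rom[1] = (uint8_t)(sum >> 16);
    rom[2] = (uint8_t)(sum >> 8);
    rom[3] = (uint8_t)(sum);
}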

[Image: ROM code disassembly]

The stock ROM in most Macs of this period was around 512K in size, but the Mac’s address map devotes a full 8MB to ROM. In a normal Mac, the rest of this address space is unused. The ROM-inator II uses larger flash memory chips to take better advantage of the available address space. The first 512K of flash memory is used for the actual ROM code, and the rest is available for interesting goodies like a ROM disk image. Rob Braun’s original romdrv paved the way for ROM booting, and I’ve made several enhancements, including a startup menu and support for compressed disk images.

 
Compatibility

The pre-programmed ROM image is compatible with the Macintosh IIx, IIcx, IIci, IIfx, IIsi, and SE/30. The Mac ROM-inator II module is physically compatible with any Macintosh having a 64-pin ROM SIMM socket, except the Quadra 660AV and 840AV. This includes the previously mentioned models as well as many other Quadra, LC, and Performa models. For these other models, the flash memory will need to be reprogrammed with an appropriate ROM image.

For a similar ROM upgrade for the Macintosh Plus, 512Ke, 512K, and 128K, see the original Mac ROM-inator Kit.

 
HD20 Support and 32-Bit Clean ROM

A nice benefit of the pre-programmed ROM image is built-in support for HD20-type hard disks, such as the HD20 hard disk emulation mode of the BMOW Floppy Emu. The Macintosh IIx, IIcx, IIfx, and SE/30 lack HD20 support in their stock ROMs, so this replacement ROM enables those machines to use HD20 disks.

The pre-programmed ROM image also makes the Mac 32-bit clean, enabling it to use more than 8MB of RAM natively without the need for special system enablers or extensions. Some older Macintosh models like the IIx, IIcx, and SE/30 have stock ROMs that are “dirty”, meaning they can’t support 32-bit addressing without ROM patches. Using the Mac ROM-inator II and the pre-programmed ROM image, the Mac SE/30 can support up to 128MB of RAM!

 
ROM Disk


The built-in ROM disk is a 5.5MB bootable disk image containing System 7.1 and a collection of classic games. Using the ROM disk, it’s possible to create a diskless workstation without any physical disks attached. Once the Mac has booted from the ROM disk, AppleTalk file servers can also be mounted over a local network.

The ROM disk image is stored compressed in the module’s flash memory and decompressed on the fly as needed, in order to squeeze the largest possible disk image into the available space. This requires 1MB of RAM for caching decompressed disk sectors, so a minimum of 2MB total system RAM is required. The ROM disk can be mounted as read-only, or as a read-write RAM disk.

 
Usage

When first powered on, the Macintosh will play a customized startup sound, and display diagnostic info about the amount of installed RAM, the current addressing mode, and the detected ROM disk type. After a moment, a startup menu will be displayed. To boot from the ROM disk as a read-only disk, press the R key on the keyboard. Or to convert the ROM disk into a writable RAM disk, press the A key. If no keys are pressed after five seconds, the Macintosh will boot normally from an attached SCSI disk, or wait for a floppy disk to be inserted.

[Images: ROM disk startup menu screens]

 
Programming

The Mac ROM-inator II’s flash memory can be reprogrammed using an external SIMM programmer, providing the ultimate in customization. There’s 4MB of flash memory available for any purpose, like a custom ROM disk image, alternate ROM code, digitized sounds, user interface tweaks, or other crazy experiments. Using compression, this is enough for the base 512K ROM image plus a roughly 5.5MB uncompressed disk image. Or fill the whole space with a collection of different base ROMs, selected from a startup menu. Go crazy!
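As a purely hypothetical sketch of one way the space could be partitioned (these names and offsets are mine, not anything defined by the ROM-inator):

/* Hypothetical 4MB flash layout for a base ROM plus compressed ROM disk. */
#define FLASH_SIZE      (4UL * 1024 * 1024)
#define BASE_ROM_OFFS   0x000000UL                    /* 512K base ROM code */
#define BASE_ROM_SIZE   0x080000UL
#define ROM_DISK_OFFS   BASE_ROM_SIZE                 /* compressed disk image */
#define ROM_DISK_SIZE   (FLASH_SIZE - ROM_DISK_OFFS)  /* remaining 3.5MB */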

[Image: SIMM programmer software]

The SIMM programmer is currently a DIY project you can build yourself. See the schematics and PCB files, firmware, and host software on GitHub.

A second-generation SIMM programmer will be ready at the BMOW Store in summer 2016.

Happy ROM hacking!


Lower International Shipping Costs!


Good news for BMOW fans from outside the United States: international shipping costs for most orders should now be substantially lower than before. I wrote about the pain of international shipping costs a few weeks ago, and ever since then it’s been on my mind. Since about half of all BMOW customers are outside the US, I want to do everything I can to make their shopping easy and inexpensive, and I’m glad I’ve finally been able to address the shipping issue.

So how did I do it? The weight-based shipping rates haven’t changed, and those are set by the US Postal Service. Since I can’t lower the postage cost for a given weight, I instead focused on reducing the weight of a given item’s shipment by using ultra-light packaging material whenever possible. Instead of shipping international orders in bulky and heavy cardboard boxes, many orders will now ship in padded mailing envelopes with a triple layer of bubble wrap inside to protect the contents. I’ve tested shipments to several different destination countries, and this packaging protects the contents just as well as a box. Lower weight means lower shipping costs, so everybody wins.

International shipping costs for typical orders will be roughly 40% cheaper thanks to this change. Most orders will now fall under the critical 0.5 pound threshold above which higher postage rates take effect. Overseas customers will see typical shipping costs reduced from $24.25 to $15.25, and Canadian customers will see a reduction from $17.00 to $11.00. Hooray!


FC8 – Faster 68K Decompression


Data compression is fun! I’ve written a new compression scheme that’s designed to be as fast as possible to decompress on a 68K CPU, while still maintaining a decent compression density. I’m calling it FC8, and you can get the generic C implementation and optimized 68K decompressor from the project’s GitHub repository. I’ve probably reinvented the wheel with this, and in a non-optimal way too, but once I started I found that I couldn’t stop. My intended platform for FC8 is 68020 and 68030-based vintage Macintosh computers, but it should be easily portable to other classic computers, microcontrollers, and similar minimal systems.

The main loop of the 68K decompressor is exactly 256 bytes, so it fits entirely within the instruction cache of the 68020/030. Decompression speed on a 68030 is about 25% as fast as an optimized memcpy of uncompressed data, which is essentially an unrolled loop of 4-byte move.l instructions with no computation involved. Compared to that, I think 25% is pretty good, but I can always hope for more. :-)

In the previous post, I described how I was using compression to squeeze a larger ROM disk image into a custom replacement Macintosh ROM that I’m designing. I began with a compression algorithm called LZG, written by Marcus Geelnard. It worked well, but the 68K decompression seemed disappointingly slow. I tried to contact the author to discuss it, but couldn’t find any email address or other contact info, so I eventually drifted towards creating my own compression method loosely based on LZG. This became FC8. FC8 compresses data as tightly as LZG, and on a 68030 CPU it decompresses 1.5x to 2x faster. FC8 retains much of the compression acceleration code from LZG, as well as the idea of quantizing lengths, but the encoding and decompressor are new.

The algorithm is based on the classic LZ77 compression scheme, with a 128K sliding history window and with duplicated data replaced by (distance,length) backref markers pointing to previous instances of the same data. No extra RAM is required during decompression, aside from the input and output buffers. The compressed data is a series of tokens in this format:

  • LIT = 00aaaaaa = next aaaaaa+1 bytes are literals
  • BR0 = 01baaaaa = backref to offset aaaaa, length b+3
  • EOF = 01x00000 = end of file
  • BR1 = 10bbbaaa’aaaaaaaa = backref to offset aaa’aaaaaaaa, length bbb+3
  • BR2 = 11bbbbba’aaaaaaaa’aaaaaaaa = backref to offset a’aaaaaaaa’aaaaaaaa, length lookup_table[bbbbb]

The encoding may look slightly strange, such as only a single bit for the backref length in BR0, but this produced the best results in my testing with sample data. The length lookup table enables encoding of backrefs up to 256 bytes in length using only 5 bits, though some longer lengths can’t be encoded directly. These are encoded as two successive backrefs, each with a smaller length.

The biggest conceptual changes vs LZG were the introduction of the LIT and EOF tokens. EOF eliminates the need to check the input pointer after decoding each token to determine if decompression is complete, and speeds things up slightly. LIT enables a whole block of literals to be quickly copied to the output buffer, instead of checking each one to see if it’s a backref token. This speeds things up substantially, but also swells the size of the data. In the worst case, a single literal would encode as 1 byte in LZG but 2 bytes in FC8, making it twice as expensive! All the other changes were needed to cancel out the compression bloat introduced by the LIT token, with the end result that FC8 compresses as compactly as LZG. Both compressed my sample data to about 63% of original size.
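To make the format concrete, here’s a minimal C sketch of a decoder for the token table above. It’s transcribed straight from the table with no bounds checking, it ignores any offset biasing the real encoder may apply, and the BR2 length table holds placeholder values rather than FC8’s actual table – see the GitHub repository for the real decompressor.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Placeholder BR2 length table: 32 quantized lengths up to 256 bytes.
   The real values live in the FC8 sources. */
static const uint16_t len_lut[32] = {
      3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
     19, 20, 21, 22, 23, 24, 25, 28, 32, 40, 48, 64, 96, 128, 192, 256
};

/* Decode an FC8 token stream into out; returns the decompressed size. */
size_t fc8_decode(const uint8_t *in, uint8_t *out)
{
    uint8_t *dst = out;
    for (;;) {
        uint8_t t = *in++;
        size_t off, len;

        if ((t & 0xC0) == 0x00) {            /* LIT: 00aaaaaa */
            len = (t & 0x3F) + 1;
            memcpy(dst, in, len);            /* whole literal block at once */
            in += len; dst += len;
            continue;
        } else if ((t & 0xC0) == 0x40) {     /* BR0: 01baaaaa (or EOF) */
            off = t & 0x1F;
            if (off == 0)                    /* EOF: 01x00000 */
                return (size_t)(dst - out);
            len = ((t >> 5) & 1) + 3;
        } else if ((t & 0xC0) == 0x80) {     /* BR1: 10bbbaaa'aaaaaaaa */
            off = ((size_t)(t & 0x07) << 8) | *in++;
            len = ((t >> 3) & 0x07) + 3;
        } else {                             /* BR2: 11bbbbba'aaaaaaaa'aaaaaaaa */
            off  = (size_t)(t & 0x01) << 16;
            off |= (size_t)*in++ << 8;
            off |= *in++;
            len  = len_lut[(t >> 1) & 0x1F];
        }

        /* Copy forward one byte at a time so overlapping (RLE-style)
           backrefs behave correctly. */
        const uint8_t *src = dst - off;
        while (len--) *dst++ = *src++;
    }
}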

The 68K decompressor code can be viewed here.

 
Decompression on the Fly

Several people mentioned the possibility of on-the-fly decompression, since the intended use is a compressed disk image. That’s something I plan to explore, but it’s not as simple as it might seem at first. Disk sectors are 512 bytes, but there’s no way to decompress a specific 512 byte range from the compressed data, since the whole compression scheme depends on having 128K of prior data to draw on for backref matches. You could compress the entire disk image as a series of separate 512 byte blocks, but then the compression density would go to hell. A better solution would compress the entire disk image as a series of larger blocks, maybe 128K or a little smaller, and then design a caching scheme to keep track of whether the block containing a particular sector was already decompressed and available. This would still have a negative impact on the compression density, and it would make disk I/O slower, but would probably still be OK.
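Here’s a rough C sketch of how that caching scheme might look. Everything in it is hypothetical: the block size, the number of cache slots, and decompress_block(), which stands in for an FC8 decompress of one independently-compressed block.

#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE   512
#define BLOCK_SIZE    (64UL * 1024)           /* compression block size: a tunable tradeoff */
#define NUM_SLOTS     4                       /* cache RAM = NUM_SLOTS * BLOCK_SIZE */
#define SECS_PER_BLK  (BLOCK_SIZE / SECTOR_SIZE)

typedef struct {
    int32_t block;                            /* block index cached here, -1 if empty */
    uint8_t data[BLOCK_SIZE];
} CacheSlot;

static CacheSlot cache[NUM_SLOTS];
static unsigned  next_victim;                 /* trivial round-robin replacement */

extern void decompress_block(uint32_t block, uint8_t *buf);   /* provided elsewhere */

void cache_init(void)
{
    for (int i = 0; i < NUM_SLOTS; i++)
        cache[i].block = -1;
}

/* Serve a 512-byte sector read from the compressed disk image. */
void read_sector(uint32_t sector, uint8_t *dst)
{
    uint32_t block  = sector / SECS_PER_BLK;
    uint32_t offset = (sector % SECS_PER_BLK) * SECTOR_SIZE;

    for (int i = 0; i < NUM_SLOTS; i++) {     /* hit: block already decompressed */
        if (cache[i].block == (int32_t)block) {
            memcpy(dst, cache[i].data + offset, SECTOR_SIZE);
            return;
        }
    }
    CacheSlot *slot = &cache[next_victim];    /* miss: evict a slot and decompress */
    next_victim = (next_victim + 1) % NUM_SLOTS;
    decompress_block(block, slot->data);
    slot->block = (int32_t)block;
    memcpy(dst, slot->data + offset, SECTOR_SIZE);
}

The density cost comes from the blocks being compressed independently: each block’s history window starts empty, so matches can never reach back into the previous block.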

Ultimately I think the two decompression approaches each have strengths and weaknesses, so the best choice depends on the requirements.

Boot-Time Decompression:
Pros: Best compression density, fastest I/O speeds once the disk image is decompressed
Cons: 5-10 second wait for decompression at boot time, requires enough RAM to hold the entire disk image

On-the-Fly Decompression:
Pros: No wait at boot time, required amount of RAM is configurable (size of the decompressed block cache)
Cons: Worse compression density, slower I/O speeds, more complex implementation
 
Hardware Tests

I discovered that a Macintosh IIci in 8-bit color mode decompresses about 20% slower than in 1-bit color mode. But a IIsi decompresses at the same speed regardless of the color settings. Both machines are using the built-in graphics hardware, which steals some memory cycles from the CPU in order to refresh the display. I’m not sure why only the IIci showed a dependence on the color depth. Both machines should be faster when using a discrete graphics card, though I didn’t test this.

The original LZG compression showed a much bigger speed difference between the IIci and IIsi, closer to a 50% difference, which I assumed was due to the 32K cache card in the IIci as well as its faster CPU. It’s not clear why the discrepancy is smaller with FC8, or whether it means the IIci has gotten worse or the IIsi has gotten better, relatively speaking. Compared to the same machine with the LZG compression, FC8 is 1.57x faster on the IIci and 1.99x faster on the IIsi. Based on tests under emulation with MESS, I was expecting a 1.78x speedup.

 
Tradeoffs

While working on this, I discovered many places where compression compactness could be traded for decompression speed. My first attempt at FC8 had a minimum match size of 2 bytes instead of 3, which compressed about 0.7% smaller but was 13% slower to decompress due to the larger number of backrefs. At the other extreme, the introduction of a LIT token without any other changes resulted in the fastest decompression speed of all, about 7% faster than FC8, but the compressed files were about 6% larger, and I decided the tradeoff wasn’t worth it.

I explored many other ideas to improve the compression density, but everything I thought of proved to have only a tiny benefit at best, not enough to justify the impact on decompression speed. An algorithm based on something other than LZ77, or a combination of LZ77 and Huffman coding, would likely have compressed substantially more densely. But decompression of LZ77-based methods is far easier and faster to implement.

 
Compression Heuristics

It eventually became obvious to me that defining the token format doesn’t tell you much about how to best encode the data in that format. A greedy algorithm seemed to work fairly well, so that’s what I used. At each point in the uncompressed data, the compressor substitutes the best match it can make (if any) between that data and previous data in the history window.

However, there are some examples where choosing a non-optimal match would allow for an even better match later, resulting in better overall compression. This can happen due to quirks in the quantizing of match lengths, or with long runs of repeated bytes which are only partially matched in the previous data. It’s a bit like sacrificing your queen in chess: sometimes you need to accept a short-term penalty in order to realize a long-term benefit. Better compression heuristics that took this into account could probably squeeze another 1% out of the data, without changing the compression format or the decompressor at all.
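The standard embodiment of that idea is lazy matching, as used by gzip’s deflate: before committing to the greedy match at position i, check position i+1, and if a longer match starts there, emit a single literal and take the later match instead. A C sketch, where find_match, emit_literal, and emit_backref are assumed helpers rather than anything from the FC8 sources:

#include <stddef.h>
#include <stdint.h>

typedef struct { size_t offset; size_t length; } Match;

/* Assumed helpers: find the best match in the history window at pos,
   and append tokens to the output stream. */
extern Match find_match(const uint8_t *data, size_t pos, size_t len);
extern void  emit_literal(uint8_t b);
extern void  emit_backref(size_t offset, size_t length);

#define MIN_MATCH 3    /* FC8's minimum match length */

void compress_lazy(const uint8_t *data, size_t len)
{
    size_t i = 0;
    while (i < len) {
        Match m = find_match(data, i, len);
        if (m.length >= MIN_MATCH && i + 1 < len) {
            Match next = find_match(data, i + 1, len);
            if (next.length > m.length) {   /* "queen sacrifice": one literal now */
                emit_literal(data[i++]);    /* buys a longer match next time */
                continue;
            }
        }
        if (m.length >= MIN_MATCH) {
            emit_backref(m.offset, m.length);
            i += m.length;
        } else {
            emit_literal(data[i++]);
        }
    }
}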


Optimizing Assembly (Fast 68K Decompression)


Are you a 68K assembly language guru, or just good at optimizing code? I’m working on a project that’s a replacement ROM for old 68K-based Macintosh computers, part of which involves decompressing a ~5MB disk image from ROM into RAM to boot the machine. This needs to happen fast, to avoid a lengthy wait whenever the computer boots up. I selected liblzg specifically for its simplicity and speed of decompression, even though it doesn’t compress as well as some alternatives. And the whole thing works! But I want it to be faster.

The compressor is a regular Windows/Mac/Linux program, and the decompressor is a hand-written 68000 assembly routine from the lzg author. It works well, but decompression on a Mac IIsi or IIci only produces about 600K/sec of decompressed data, so it takes around 5-20 seconds to decompress the whole ROM disk, depending on its size and the Mac’s CPU speed.

The meat of the 68000 decompression routine isn’t too long. It’s a fairly simple Lempel-Ziv algorithm that encodes repeated data as (distance,length) references to the first appearance of the data. There’s a brief summary of lzg’s specific algorithm here. Anyone see any obvious ways to substantially optimize this code? It was written for a vanilla 68000, but for this Mac ROM it’ll always be running on a 68020 or ‘030. Maybe there are some ‘030-specific instructions that could be used to help speed it up? Or some kind of cache prefetch? There’s also some bounds-checking code that could be removed, though the liblzg web site says this provides only a ~12% improvement.

The meat-of-the-meat where data gets copied from source to dest is a two-instruction dbf loop:

_loop1:	move.b	(a4)+,(a1)+
	dbf	d6,_loop1

If any ‘030-specific tricks could improve that, it would help the most. One improvement would be to copy 4 bytes at a time with move.l instead of move.b. But the additional instructions needed to handle 4-byte alignment and 1-3 extra bytes might outweigh the savings for smaller blocks being copied. I think the average block size is around 10 bytes, though some are up to 127 bytes.

The loop might also be unrolled for certain pre-defined block sizes.
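Here’s the shape of that idea in C. A hand-written 68K version would instead use a computed jump into an unrolled run of move.b (a4)+,(a1)+ instructions, but the key constraint is the same: the copy must proceed forward a byte at a time, because a backref’s source and destination ranges can overlap for RLE-style references.

#include <stdint.h>

/* Unrolled byte copy: same byte order as the simple dbf loop, so
   overlapping (RLE-style) backrefs still behave correctly, but with
   one-eighth the loop overhead. */
static void copy_unrolled(uint8_t *dst, const uint8_t *src, unsigned len)
{
    while (len >= 8) {
        dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3];
        dst[4] = src[4]; dst[5] = src[5]; dst[6] = src[6]; dst[7] = src[7];
        dst += 8; src += 8; len -= 8;
    }
    while (len > 0) { *dst++ = *src++; len--; }
}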

Here’s the entirety of the decompressor’s main loop:

_LZG_LENGTH_DECODE_LUT:
	dc.b	1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16
	dc.b	17,18,19,20,21,22,23,24,25,26,27,28,34,47,71,127
	
_lzg_decompress:
	// a0 = src
	// a1 = dst
	// a2 = inEnd = in + inSize
	// a3 = outEnd = out + decodedSize
	// a6 = out
	move.b	(a0)+,d1			// d1 = marker1
	move.b	(a0)+,d2			// d2 = marker2
	move.b	(a0)+,d3			// d3 = marker3
	move.b	(a0)+,d4			// d4 = marker4
 
	lea		_LZG_LENGTH_DECODE_LUT(pc),a5	// a5 = _LZG_LENGTH_DECODE_LUT
	
	// Main decompression loop
	move.l	#2056,d0			// Keep the constant 2056 in d0 (for marker1)
	cmp.l	a2,a0
_mainloop:
	bcc.s	_fail				// Note: cmp.l a2,a0 must be performed prior to this!
	move.b	(a0)+,d7			// d7 = symbol
 
	cmp.b	d1,d7				// marker1?
	beq.s	_marker1
	cmp.b	d2,d7				// marker2?
	beq.s	_marker2
	cmp.b	d3,d7				// marker3?
	beq.s	_marker3
	cmp.b	d4,d7				// marker4?
	beq.s	_marker4
 
_literal:
	cmp.l	a3,a1
	bcc.s	_fail
	move.b	d7,(a1)+
	cmp.l	a2,a0
	bcs.s	_mainloop
 
	// We're done
_done:
	// irrelevant code removed
	bra _exit
 
	// marker4 - "Near copy (incl. RLE)"
_marker4:
	cmp.l	a2,a0
	bcc.s	_fail
	moveq	#0,d5
	move.b	(a0)+,d5
	beq.s	_literal			// Single occurrence of the marker symbol (rare)
	move.l	d5,d6
	and.b	#0x1f,d6
	move.b	(a5,d6.w),d6		// length-1 = _LZG_LENGTH_DECODE_LUT[b & 0x1f]
	lsr.b	#5,d5
	addq.w	#1,d5				// offset = (b >> 5) + 1
	bra.s	_copy
 
	// marker3 - "Short copy"
_marker3:
	cmp.l	a2,a0
	bcc.s	_fail
	moveq	#0,d5
	move.b	(a0)+,d5
	beq.s	_literal			// Single occurrence of the marker symbol (rare)
	move.l	d5,d6
	lsr.b	#6,d6
	addq.w	#2,d6				// length-1 = (b >> 6) + 2
	and.b	#0x3f,d5
	addq.w	#8,d5				// offset = (b & 0x3f) + 8
	bra.s	_copy
 
	// marker2 - "Medium copy"
_marker2:
	cmp.l	a2,a0
	bcc.s	_fail
	moveq	#0,d5
	move.b	(a0)+,d5
	beq.s	_literal			// Single occurrence of the marker symbol (rare)
	cmp.l	a2,a0
	bcc.s	_fail
	move.l	d5,d6
	and.b	#0x1f,d6
	move.b	(a5,d6.w),d6		// length-1 = _LZG_LENGTH_DECODE_LUT[b & 0x1f]
	lsl.w	#3,d5
	move.b	(a0)+,d5
	addq.w	#8,d5				// offset = (((b & 0xe0) << 3) | b2) + 8
	bra.s	_copy
 
	// marker1 - "Distant copy"
_marker1:
	cmp.l	a2,a0
	bcc.s	_fail
	moveq	#0,d5
	move.b	(a0)+,d5
	beq.s	_literal			// Single occurrence of the marker symbol (rare)
	lea		1(a0),a4
	cmp.l	a2,a4
	bcc.s	_fail
	move.l	d5,d6
	and.b	#0x1f,d6
	move.b	(a5,d6.w),d6		// length-1 = _LZG_LENGTH_DECODE_LUT[b & 0x1f]
	lsr.w	#5,d5
	swap	d5
	move.b	(a0)+,d5
	lsl.w	#8,d5
	move.b	(a0)+,d5
	add.l	d0,d5				// offset = (((b & 0xe0) << 11) | (b2 << 8) | (*src++)) + 2056
 
	// Copy corresponding data from history window
	// d5 = offset
	// d6 = length-1
_copy:
	lea		(a1,d6.l),a4
	cmp.l	a3,a4
	bcc	_fail
 
	move.l	a1,a4
	sub.l	d5,a4
 
	cmp.l	a6,a4
	bcs	_fail
 
_loop1:	move.b	(a4)+,(a1)+
	dbf	d6,_loop1
 
	cmp.l	a2,a0
	bcs	_mainloop
	bra	_done

Another thing to note is that about half of all the data is literals rather than (distance,length) markers, and goes to the _literal branch above. That involves an awful lot of instructions to copy a single byte. A faster method of determining whether a byte is a marker or a literal would help - I plan to try a 256-entry lookup table instead of the four compare and branch instructions.
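In C terms, the table replaces the four compare/branch pairs with a single indexed load; on the 68K it would be one move.b (a5,d7.w),d6 from a 256-byte table followed by a branch. A sketch with hypothetical names:

#include <stdint.h>

static uint8_t marker_type[256];   /* 0 = literal, 1..4 = marker1..marker4 */

static void init_marker_table(uint8_t m1, uint8_t m2, uint8_t m3, uint8_t m4)
{
    for (int i = 0; i < 256; i++)
        marker_type[i] = 0;
    marker_type[m1] = 1;
    marker_type[m2] = 2;
    marker_type[m3] = 3;
    marker_type[m4] = 4;
}

/* In the decode loop:
       uint8_t sym = *src++;
       switch (marker_type[sym]) {
       case 0:  ...emit literal...   break;
       case 1:  ...distant copy...   break;
       ...
       }
*/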

My final idea would involve changing the lzg algorithm itself, and making the compression slightly worse. For longish sequences of literals, the decompressor just copies bytes from input to output, but it goes through the whole _literal loop for each byte. I'm thinking of introducing a 5th marker byte that means "copy the next N bytes directly to output", for some hand-tuned value of N. Then those N bytes could be copied using a much higher performance loop.


Capacitor Replacement in a Vintage Power Supply


Capacitors don’t last forever – an unfortunate fact of life for those who collect vintage electronics. The common electrolytic capacitor is one of the most problematic. It’s the type that looks like a little metal can, and after a couple of decades electrolytics tend to start leaking corrosive capacitor goo onto the PCB. You may recognize the strange smell of fish as an early warning sign. Eventually the goo will destroy traces on the PCB, or the changing electrical properties of the capacitor will cause the circuit to stop working. If you want to preserve your vintage equipment, that’s when it’s time for a “recap”.

I have an old Macintosh IIsi computer that dates from around 1991. A few years ago it started acting funny and having trouble turning on, so I sent the logic board to Charles Phillips’ MacCaps Repair Service. He did a great job with the capacitor replacement, and the machine was working great again. But then a few months ago it started to develop new problems that pointed to the need for a power supply recap. I could rarely get it to turn on at all, and when it did, I couldn’t get it to turn off again without unplugging it. Simply plugging the computer into wall power without turning it on caused strange clicking noises from the PSU. And oh, that fish smell.

I was going to send the PSU off for a recap. After all, there’s a big warning printed right on the metal cover saying danger, do not open, no user-serviceable parts inside. And while there’s not much danger in a 5 volt logic board, there is a potential for real danger in a power supply drawing 5 amps at 110 volts AC. But then I thought no, I should really get comfortable doing this kind of work myself. I have the tools and the skills, just not the experience or confidence. What’s the worst that could happen? OK, it could blow up and catch fire, but I’ve got a fire extinguisher. :-)


There are 12 electrolytic capacitors in this power supply, whose types and values are listed here. Two of these are surface mount caps on a daughterboard that’s connected to the main PCB, and the others are all through-hole caps. Because I’m both timid and lazy, I really did not want to replace 12 caps. After reading this discussion thread from someone who did a similar repair, I decided to replace only the three capacitors that seemed most likely to be causing the problem. Two of these were the SMD caps on the daughterboard, which apparently are involved in some kind of PWM control circuit. The third was a 400V cap in the AC section of the power supply. It’s located directly next to some big heat sink thing, and has probably been slowly baking for 25 years.

To help with the job, I bought a cheapo vacuum desoldering iron. This makes desoldering of through-hole components easy. Just put the iron over the pin, hold for a second, then press the button to release the plunger, and most of the solder is sucked up. I used this to desolder the daughterboard too. I had to revisit a few pins to get them totally clean, but overall the process was simple. I don’t do enough desoldering to justify the cost of a fancier desoldering gun with a continuous suction vacuum pump, so this seemed like a good tool for occasional use.


I removed the two SMD capacitors on the daughterboard with a hot air tool. I’m not sure how you would do that without such a tool – just rip them off with pliers? The hot air worked fine, except when I used tweezers to slide off the caps after the solder had melted, I accidentally pushed one of them right through a bunch of other little SMD components, whose solder had also melted, and ended up with a jumbled heap of little components half soldered together in a corner of the board. Ack!!

Here’s the daughterboard, before I wrecked it. The four components at bottom right were all pushed into a pile in the corner. A couple of them actually fell off the board, as did one of the pins. But with some patience I was able to separate them all and get things cleaned up, and I think I even put everything back where it was originally. :-) After removing the old caps, I cleaned up the board with isopropyl alcohol and a toothbrush to remove the capacitor goo.

[Photo: the PSU daughterboard]

The last step was soldering in new capacitors, and putting it all back together. Compared to everything else, that was a breeze.

When the time came for testing, I didn’t take any chances. I brought the whole machine outside, with a fire extinguisher in hand, ready for anything! I plugged it in, pressed the power switch, and… WOOHOO! It booted right up, and everything now looks a-ok. I can boot from the rear power switch or the keyboard power button, and the soft power-off function works again too. I feel like Superman!

This was my first time recapping anything, and I won’t be so timid about recapping next time the need arises. The whole process took about three hours, including lots of futzing around during disassembly and reassembly. If I hadn’t blundered by knocking off a bunch of unrelated SMD parts, I probably could have done the whole job in about an hour.


Identify the Mystery Components


I’m planning to do a partial capacitor replacement on the power supply of my old Macintosh IIsi computer. After 25+ years, these capacitors aren’t in the best condition, and the PSU doesn’t work correctly anymore. When plugged in, it makes odd clicking sounds for a few seconds, then does nothing, and the computer won’t boot. Occasionally if I plug it in, unplug it, plug it in again, and say some magic words, I can get the computer to boot. But it’s clearly on its last legs, and the research I’ve done says replacing a few key capacitors will likely fix it.

After dismantling the PSU and removing its circuit board, I was surprised by some of the components I found inside. I’ve never looked inside a power supply before, so this was all new to me.

[Photo: inside the PSU]

In the center is a relay. I’m not sure why there’s a relay inside a PSU, but there it is. At right is probably a transformer? It has some exposed wire windings, and is located close to where 110V AC comes in, so I assume that’s it. At left is… something. A capacitor? It looks like a rolled up length of plasticized paper, coated in oil.


Here’s a closer look at the mystery capacitor thing.

[Photo: close-up of the mystery component]

On the other side of the PSU circuit board are two white plastic towers. They look like they might be removable covers. What are these, and what mysteries do they hide?

[Photos: the white plastic towers and grooved cylindrical components]

At the end of the board opposite the AC power connection, there are two cylindrical components that look sort of like capacitors, but aren’t. They have vertical grooves cut into them at regular intervals around their circumference. The smaller of the two has 4R7 stamped into its plastic case, and the larger one is marked 830. Could these be some kind of high-power resistor, maybe?

