A few years ago I bought a 3D resin printer so the kids and I could learn a little bit more about modeling and fabricating 3D objects. While it's been a great experience, we haven't printed much in the last year because of all the headaches of dealing with resin. Every time we do a print we have to deal with temperatures, level the plate, put on all the safety gear, and then clean up everything at the end. It's a lot of overhead and dangerous enough that I don't want my kids doing it when I'm not home. I've been thinking it would be nice to have a traditional FDM printer on hand to lower the barrier and make printing simple things more accessible to the kids. After a lot of internet wandering, I decided to get the new Anycubic Kobra-2. It's new, works with Linux, and shipped from Amazon with a 1KG spool of filament for $300.
Setup
The kids and I set up the Kobra-2 on my desk in the garage. The assembly wasn't too difficult, although it took us a while to figure out how to hold the frame so we could get some of the machined screws lined up properly. It was also a little unclear how far the feeder tube was supposed to go into the print head (does this go any farther in?). Once it was set up we ran the auto calibration tool to probe the height of the build plate. Auto calibration was a required feature for me, and one of the reasons why I'm happy to be buying a printer after the technology has had a chance to mature. We then loaded the filament, let everything preheat, and had it print the famous 3DBenchy boat design. The kids and I watched with wonder as the extruder spun around the plate with robot brrrrr noises. FDM printing is so much more exciting to watch than resin because you really see it happen. With resin the plate moves up and down every few seconds, with an upside-down design that's coated in excess resin. While resin does add a whole layer at a time, it takes a long time to get through all the pads and supports before you get to your actual design.
Sample Prints
3DBenchy only took 30 minutes to print. One of the other selling points of this printer is that it can do higher-speed prints (150mm/s to 250mm/s, compared to the 60mm/s of the stock Ender printers). I was really tempted to get one of the $600 Bambu printers, which can do up to 500mm/s, but decided we should start with a basic printer and see how much we like it first. Benchy came out looking pretty good, though you can see some pixelation in the windows that I don't think you'd have in resin. That's fine though- I think I'm more interested in building functional widgets with this printer than detailed figures.
The next thing we printed was a small mesh cup I pulled from Thingiverse. This design came as a plain STL object, so I had to load it into a slicer to render it to gcode. Anycubic says to use PrusaSlicer, a powerful slicer built for Prusa printers. It's free and has a Linux version that worked in my Chromebook's Linux container. I had to download the printer settings from the Anycubic support site, but they loaded fine. For this design I just loaded the cup, hit slice, and saved the gcode. PrusaSlicer had a lot of detailed info about how it built the object. I liked that it recognized the interior and autofilled it with a grid to save on material. The scaled-down version of the print took about an hour to build (correctly predicted by PrusaSlicer). I was impressed that the printer was able to build a thin mesh and have it come out ok (though later I broke it trying to trim some of the base).
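As an aside, PrusaSlicer also has a command-line mode, so re-slicing a tweaked model doesn't have to go through the GUI. Here's a rough sketch of driving it from Python; the profile and file names are placeholders for whatever you pulled from Anycubic's site.

```python
import subprocess

# Slice an STL to gcode with PrusaSlicer's CLI (flags per its --help).
# "kobra2.ini" stands in for the Anycubic profile from their support
# site, and the model/output names are made up for this example.
subprocess.run([
    "prusa-slicer",
    "--export-gcode",          # slice and write gcode instead of opening the GUI
    "--load", "kobra2.ini",    # printer/filament profile (hypothetical filename)
    "--output", "mesh_cup.gcode",
    "mesh_cup.stl",
], check=True)
```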
Next up was a micro-sd card holder. I found a clever design someone had made with a radial container and a screw-on lid. The threading is really interesting to me because it gives you a way to connect parts together (someone also modified the design so you could screw together multiple micro-sd containers, though I doubt I'll ever fill this one). The parts I printed screwed together just fine. Two of the slots weren't deep enough, but that's ok. I should have added an "up" label though, as the slots don't have enough friction to keep cards in place if you open it upside down.
Finally, I printed a baby guardian dragon dice holder from Thingiverse for my niece. The design has a spot where you can put a die. It's a cute design, though the FDM version came out with a bunch of visible layer lines on the angled surfaces.
Issues
We have had a few issues with the Kobra-2 during our first week of use. My son had a few failed prints that we're still trying to figure out. The printer would get partway through the base of the design, get stuck, and then go into an endless calibration loop. It's possible this is because we had installed a newer version of the slicer than we were previously using. When I went back and sliced the design with my Chromebook, it printed fine. Again, it's nice that the setup/cleanup for a print is easy enough that retrying isn't a big deal. The other main issue has been quality. The FDM prints look good, but they're not as detailed as the resin prints. Below are some zoom-ins showing spots where the FDM prints came out jagged.
Power
One thing I've noticed about the FDM printer is that the motors really take a beating, zig-zagging back and forth all the time. Our house doesn't have great wiring, so the lights in the garage (and bathroom) flicker slightly when the printer is bouncing around. There's also a spike in power at startup because it needs to warm up the build plate and nozzle. Maybe I'll look into getting a battery or power conditioner for the plug to smooth out the load.
Overall
Overall, I'm pretty happy with the Kobra-2 so far. After dealing with all the resin printing pains it's been a breeze to get FDM working. I don't think we'll print a ton of things, but it's nice to have the option to design and build stuff when we want.
While 3D printing bones from CT scans was fun, what I'd really like to do is scan in ordinary objects that I could manipulate and print. Professionals that do this kind of scanning use expensive LiDAR scanners or set up special rooms with calibrated camera arrays. Intel's RealSense L515 camera looked like a promising, lower-cost LiDAR camera, but Intel just announced that they're killing off all the RealSense products (yet another failed Intel product line). Apple's phones and tablets with LiDAR sound like what I need, but I just can't force myself to buy anything from Apple (not even that laptop that thieves just ordered with my credit card and shipped to my house by mistake!). That leaves me with plain old photogrammetry from images.
Qlone
Previous experiments with Meshroom left me feeling like I could get good results, but only if I put a lot of effort into doing the photography right. In hopes of finding something more integrated with a phone, I skimmed websites and found some positive reviews of an Android app named Qlone that can scan simple objects using just a phone. The clever part of their approach is that they use a specially formatted mat that you print out to help orient the camera. I downloaded the free version of the app, printed the mat, and set off to scan in an old Max Headroom candy container I bought from a gas station back in the eighties (interestingly, someone else has already scanned this in and posted it to Thingiverse).
One of the nice things about Qlone's approach is that the mat makes it possible for the camera to figure out its orientation in real time. The app renders a dome over your camera view while you're scanning so you can see which angles still need to be captured. I initially tried moving the phone around the object to get all the pictures, but it was a lot easier to hold the camera at one orientation and just rotate the mat. Once the app had pictures from all angles, it went through the math to churn out a 3D model. While the model did look like Max Headroom, the quality was pretty low. This might have been because the subject was plastic (photogrammetry has trouble with glossy surfaces), but I'd also suspect it's difficult to get good results out of a phone app.
The free version didn't offer a useful way to export the model, so the only result I have is this gif. The app baited me with a "buy in the next 20 minutes and get a discount" option. I declined though, figuring anything with a time-limited sale like this probably isn't worth it. While I really liked the user interface for this tool, it didn't produce results that would be good enough for printing.
Back to Meshroom
I decided to hunker down and spend some time experimenting with how to take pictures that would work better for Meshroom. People online suggested using a prime lens and constant camera settings, so I impaled Max on a pvc pipe, set him up in my garage, and then did a sweep of pictures using my Canon 77D with a 50mm EF prime lens. I (wrongly!) guessed that I should use a low f-stop (ie, F/1.8) to maximize the blur in the background, hoping this would make it easier for Meshroom to pick out the foreground. About midway through the pictures I realized that the lighting in the garage wasn't very even and that several of my photos were out of whack from me having to step around the junk sitting in my garage. As I feared, Meshroom was only able to infer about half of Max's head. I moved Max's pipe outside and did another round of pictures in the sunlight.
Meshroom did a much better job reconstructing Max with the second round of pictures. The results are bumpy, but the general shape is there with a decent amount of detail. I was impressed at how well it found the lines in the glasses and the creases in his forehead.
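As a side note, Meshroom can also run headless, which would make redoing a photo sweep scriptable instead of click-driven. A rough sketch; recent releases ship the batch binary as meshroom_batch (older ones called it meshroom_photogrammetry), and the folder names here are placeholders.

```python
import subprocess

# Rerun the whole photogrammetry pipeline without the GUI. The binary
# name varies by Meshroom release, and these paths are just examples.
subprocess.run([
    "meshroom_batch",
    "--input", "max_photos_outdoor/",  # the folder of sunlit pictures
    "--output", "max_mesh_out/",       # where the textured mesh lands
], check=True)
```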
Scan Problems
While the scan looks pretty good visually, you begin to see more of the flaws when you take away the texture map and just look at the surface mesh. If you look at the arms of Max's glasses below, you'll notice that the mesh doesn't know there's a space between the glasses and his face. The texture map adds some color shading to the triangles that makes it look like the glasses are floating and casting a shadow on his skin. Unfortunately, I need the mesh to look good by itself if I want to make a good 3D print.
It's difficult for photogrammetry software to recognize separations like this without a lot of supplemental pictures that aim through the gap. If I wanted to scan Max in the right way, I'd probably pull the Max figure apart and scan each item separately. I'd probably also spray him with something to make the plastic less reflective. This is more work than I want to put into it right now.
Reading through some other web pages, I've realized that I got my camera settings backwards: primes are good for consistency, but the background should not be blurry. A presentation at the Slade School of Fine Art on Photogrammetry recommended an f-stop of F/8.0 as a starting point. Given the number of pictures that Meshroom threw away, I suspect it's useful to have some background detail to help figure out what's going on in the foreground. In any case, these are all good ideas to try out another day.
Files
Here's a copy of the mesh and textures: maxmesh.tar.xz
A few months ago I bought my son a Creality LD-002H Resin 3D Printer so we could play around with printing some simple objects. Like most people, we watched a lot of videos to get a better handle on how to (safely) work with resin, printed a bunch of example objects from Thingiverse, and then watched more videos to figure out how to make better prints. While printing models has been fun, I'd really like the kids to learn how to create their own objects through photogrammetry tools like Meshroom or editors like Blender.
The last few weeks I've been working through the details of how I could convert a CT scan of me into something I could print. Through a combination of open-source tools I was able to generate a mesh model of my pelvis and print a small version of it. I seem to forget how to use all these tools after a few days, so this post is just some notes for me to be able to recreate the process in the future.
Viewing CT Data in 3D Slicer
Back in 2013 I had to go to the hospital because I had a bad infection that needed surgery. The doctors ran me through a CT machine to get a better view of what was happening inside me. After my surgery I learned that Kaiser will burn a CD with your data on it for only $20 if you ask. Being curious, I ordered a copy and poked around with it for a bit. The data wasn't in a format that I recognized, but a viz friend pointed me at a tool called 3D Slicer (based on VTK) that's designed for looking at medical data. It's a little confusing to get started, but you basically:
- Start Slicer
- Use 'Add Data' to point it at your CD
- Use DICOM to see the available entries in the data
- Select the entry with scan data (eg SAG 3MM) and Load
- Change to volume rendering and open the eye icon
- Adjust the Shift value to change the threshold level
I always stumble around with the default view settings. Usually the problem is that I haven't loaded a DICOM entry with anything in it (eg, a patient record) or I forgot to tell Slicer to view the data by turning the volume's closed eye into an open eye. Once the 3D data shows up, I switch to a 3D-only view by selecting View and Layout.
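If you'd rather script the loading step, Slicer has a built-in Python console. Here's a rough sketch using the DICOMUtils helpers from Slicer's script repository; the CD path is a placeholder and the exact API can shift between Slicer versions.

```python
# Run inside 3D Slicer's Python console. The path is a placeholder for
# wherever the hospital CD is mounted; the helpers come from Slicer's
# DICOMLib and may differ slightly between versions.
from DICOMLib import DICOMUtils

dicomDir = "/media/cdrom"

# Import the CD into a scratch DICOM database, then load every patient.
with DICOMUtils.TemporaryDICOMDatabase() as db:
    DICOMUtils.importDicom(dicomDir, db)
    for patientUID in db.patients():
        DICOMUtils.loadPatientByUID(patientUID)
```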
Extracting and Smoothing Contours
While the volume rendering tool is a nice way to poke through the data, what you really want is the contour for the bone, extracted as a mesh (ie, isosurfacing). To do this you run the Segmentation tool to build a mesh and then smooth it to make it more printable:
- Turn off the volume rendering
- Switch to the Segment Editor
- Select the Master volume to segment (note: default may not be right)
- Add a new segment and click Edit
- Click Show 3D to view
- Select Threshold, pick a value, and then apply
- Use Islands to remove background noise
- Use Smoothing to fix some bumps
It took some trial and error to find a threshold value that produced a good view of my pelvis (too low and you get tissue, too high and the bone starts disappearing). The two main issues I saw with segmentation were that some regions had small holes and some curving regions were bumpy like corduroy. Selecting the Smoothing option from the Segmentation options helped fix some of these problems. I'm sure you could do a lot more here to fix things- some of the videos I watched showed how to get precise segmentation by hand labeling points. I'm not printing an actual hip replacement so I didn't go into too much detail. Once I was done I clicked on the Segmentations button and exported to STL.
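The threshold hunt can also be scripted from Slicer's Python console, which would make redoing the segmentation less tedious. This sketch follows the pattern in Slicer's script repository; the volume name and threshold range are examples, not the values I used.

```python
# Scripted version of the threshold step (Slicer Python console). The
# volume name and threshold numbers are examples; hunt for values that
# isolate bone in your own scan.
import slicer

masterVolume = slicer.util.getNode("SAG 3MM")  # the loaded CT volume (name assumed)

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(masterVolume)
segmentationNode.GetSegmentation().AddEmptySegment("bone")

# Stand up a segment editor widget so we can run the Threshold effect.
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setMasterVolumeNode(masterVolume)  # newer Slicers: setSourceVolumeNode

segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "200")   # too low picks up tissue
effect.setParameter("MaximumThreshold", "3000")  # too high erodes the bone
effect.self().onApply()
```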
Cleanup in Meshlab
I pulled the STL file into MeshLab to verify it looked ok in another meshing tool and do some additional cleanup. While I think Slicer can do this cleanup, MeshLab seemed like an easier way to make some hand edits. I looked around in the mesh and manually removed some leftover polygons.
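MeshLab's filters can also be driven from Python via the pymeshlab package, which would make the routine part of the cleanup repeatable (the hand edits still need the GUI). A minimal sketch; the filter names are from recent pymeshlab releases and older versions used shorter ones.

```python
# Repeatable version of the routine cleanup (pip install pymeshlab).
# File names are placeholders; filter names follow recent releases.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("pelvis.stl")

ms.meshing_remove_duplicate_faces()        # drop doubled-up triangles
ms.meshing_remove_unreferenced_vertices()  # drop points no face uses

ms.save_current_mesh("pelvis_clean.stl")
```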
Adding Supports in Chitubox
The final step for me was to load the model into Chitubox to turn it into a printable object. Chitubox is pretty amazing- it'll analyze an object and figure out what supports need to be added to make it printable on a 3D printer. My neighbor does a lot of 3D printing and gave me a lot of tips on making good prints with Chitubox. eg, make the back side of the model face the build plate so the supports don't leave bumps in important places, angle the structure to make it easier to build, use the skate support platform shape to make it easier to remove, etc.
I manually placed the pelvis model and rotated it slightly, but pretty much used the default settings everywhere else. Even with a ton of supports, Chitubox still wasn't too happy with the printing risk. Once it was done I adjusted the print's first layer exposure time to 60s and the remaining layers to 6s. Based on previous prints, if you don't expose the first layer for long enough it doesn't stick to the build plate and your print fails.
Printing, Cleaning, and Curing
The final step was to take the model to the printer and print it out. Resin printing is a pretty messy and dangerous process. You level the build plate (crucial!), pour resin into the vat, load the vat into the printer, and then print the model. This print took about four hours, but I didn't have to babysit it after I verified the first few layers had stuck. When it finished, I chiseled the part off the build plate and dropped it in a pickle jar (w/ strainer) filled with isopropyl to clean off the uncured resin. After a lot of shaking, I took it out and clipped off all the supports (which is unusually satisfying). From there I did another quick rinse in isopropyl, patted it down with paper towels, and put it under a UV light to cure.
I'm still new to 3D printing but I think it looks pretty decent. The main problem with this print is that I scratched it up quite a bit when I was cleaning it up with paper towels (the print is still pretty soft at this point). In the future I'll probably buy a curing station which simplifies a lot of the cleaning and curing problems. The other problem with the print is that there are some extra holes in the back because I set the contouring threshold value too high. At least that's what I hope- maybe I just have really thin bones.
Safety and Cleanup
I should point out that the resin I'm using is toxic when uncured, and it's important to follow safety procedures when doing this kind of printing. I wear disposable nitrile gloves and safety glasses whenever I work with uncured resin, and crack the garage door to help vent the area. When it comes time to handle wet prints, my neighbor suggested the practice of having one clean hand and one dirty hand, since there's always something you need to grab and you want to minimize what gets dirty. I use an excessive amount of isopropyl to clean the build plate, vat, and tools when I'm done. Fortunately, you can just leave the dirty towels and gloves in the sun for 30 minutes to cure them and make them safe for disposal.
There was a lot of talk on local social media this week about a large military plane that flew over Livermore at a very low altitude. The town is already wound up about larger planes flying into Livermore because there's an expansion plan being discussed that would allow private charter 737s to land at the airport. Seeing a massive, loud military plane flying low over town made everyone wonder if that's what daily life is going to be like in the next few years.
There was a lot of speculation about what was going on with this flight. Some people thought it was emergency vaccine supplies being dropped off. Others claimed it was a military salute for a Lt. Col. in Pleasanton who was just awarded the Distinguished Flying Cross. In the end it turned out to be a C-17 Globemaster doing a practice landing approach at our municipal airport (see Brodie Brazil's video of the approach).
Tracking SOUND87
I went to my PiAware node and pulled the day's data to see what military flights took place on Wednesday. I found that AE07E0 was active around the 4pm window people were talking about. Interestingly, the flight used the call sign SOUND87, which made me wonder if this was some kind of sound test for the airport expansion (I doubt it now- it seems like a routine exercise). As the below plots show, the plane flew in from the east, passed the airport, and made a sharp turn north. Zooming in on the east side of town, I noticed it flew right over LLNL at about 2K ft, just a block away from their $3.5B National Ignition Facility (NIF). I'd thought they had a no-fly zone over them, but that seems to only apply to drones. Here is the raw data for the flight.
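For reference, once a day's messages are in a flat table, pulling out the flight is a couple of lines. A sketch with pandas, assuming a CSV of decoded messages with timestamp/icao/callsign/position columns (adjust for whatever your PiAware logger actually writes):

```python
# Pull one plane's track out of a day of decoded ADS-B messages.
# Assumes a CSV with timestamp, icao, callsign, lat, lon, and altitude
# columns; the file name and schema are made up for this sketch.
import pandas as pd

df = pd.read_csv("adsb-wednesday.csv")
flight = df[df["icao"] == "AE07E0"].sort_values("timestamp")

print(flight["callsign"].dropna().unique())           # e.g. SOUND87
print(flight[["timestamp", "lat", "lon", "altitude"]].head())
```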
Debunking the Salute Theory
The idea that the military would dive bomb a city to show its appreciation for a soldier bothered me, so I went to FlightRadar24 and pulled up the data for the whole flight. As seen below, they took off from Vegas, circled the bay area, and then dropped in on Livermore. After that, they flew north to Concord and did a similar practice approach at Concord's municipal airport (CCR) before landing at Travis AFB. Given that they didn't fly anywhere near Pleasanton and made a second practice approach somewhere else, I'd guess this has nothing to do with the Lt. Col's award.
Questioning the Airport Expansion
One thing this flight really highlights is that Livermore people will notice larger planes flying to our airport. While the C-17 is much bigger and louder than the 737s the expansion is targeting, it made a lot of people realize that the airport approach really does stretch all the way across town, starting at the $3.5B big science experiment at the lab. I hope enough people stand up to the FAA and prevent larger planes from being able to land there.
One of the weirder news stories that came out when Trump announced he had COVID-19 was that the US's doomsday planes were now hovering, poised to send out missile launch commands to submarines. It looks like this started when someone on Twitter noticed that some of the US's Boeing E-6B planes were heading out over the oceans on the east and west coasts, and that these planes are the mobile command centers for coordinating with submarines. Twitter and Fox did what they do best and went off the rails trying to figure out what it all means. Fortunately, plane spotters like Christiaan Triebert and others properly dumped flight histories to show that these flights actually happen all the time. I didn't know anything about these planes, so I spent the morning reading Wikipedia and looking through my data to see if I could find them. Yep! There are some in CA and they do show up all the time! Relax.
Boeing E-6 Mercury Planes
From Wikipedia, the Boeing E-6 Mercury is a variant of the Boeing 707 that was built for the military to provide communications with strategic forces in case ground systems are wiped out. There are 16 of these planes in use, and from ADS-B.NL you can learn that their ICAO ids are AE040D-AE041C (conveniently sequential in the military ICAO range). I've been leaving my flight tracker on all the time since the outbreak, so I did some greps on my recent data. Sure enough, I found some hits in yesterday's data. Digging through all my data and plugging it into pandas yielded the below breakdown of how many days each plane flew near me over the last few months.
As the above shows, I saw five different E-6 planes, with some of them being active as many as 10 days out of the month. While the tracker was up a lot of the time, there were some gaps in March, August, and September (the tracker crashed without me knowing it for a week; I powered it off for a few days when the garage was over 110 degrees; we had a few power outages during the fires).
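The breakdown itself is only a few lines of pandas once all the logs are in one table. A sketch, again assuming a flat CSV of messages (the file name and columns are made up):

```python
# Count how many distinct days each E-6 showed up in the logs. Assumes
# the same flat CSV-of-messages layout as before.
import pandas as pd

E6_ICAOS = {f"AE{n:04X}" for n in range(0x040D, 0x041C + 1)}  # AE040D-AE041C

df = pd.read_csv("adsb-all.csv", parse_dates=["timestamp"])
e6 = df[df["icao"].isin(E6_ICAOS)]

days_seen = (
    e6.assign(day=e6["timestamp"].dt.date)
      .groupby("icao")["day"]
      .nunique()
)
print(days_seen.sort_values(ascending=False))
```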
Heading out to Sea
Looking through the tracks, there are several instances where the planes fly out to sea and circle around a lot. The following tracks are from July 26, August 13, and September 14. As these tracks show, flying out to sea is not an uncommon event.
Information is Surprise
Information theory elegantly defines "information" as a measure of how much surprise is in the data. Things that happen all the time are not news. Unusual events are. Reporting on these "doomsday planes" without giving some background info is providing news- but the news for most people is just that the US has these planes at all. Taking a broader look at the data, you find that these flights do not seem to be related to Trump's health, and that we don't need to assume the worst just yet.
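For the curious, information theory makes the "surprise" idea literal: an event with probability p carries -log2(p) bits of information. A tiny illustration with made-up probabilities:

```python
# Shannon self-information: the rarer the event, the more bits of "news".
# The probabilities are invented for illustration.
import math

def surprise_bits(p: float) -> float:
    return -math.log2(p)

print(surprise_bits(0.5))    # 1 bit: a coin flip is barely news
print(surprise_bits(0.001))  # ~10 bits: a rare event is a headline
```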