I'm pleased to announce that we've officially open sourced and released FAODEL (Flexible, Asynchronous, Object Data-Exchange Libraries) on GitHub. The 1.1803.1 version is a snapshot of all our libraries: Kelpie (a key/blob service), OpBox (an asynchronous communication engine), Lunasa (a network memory management unit), NNTI (an RDMA portability library from Nessie), and Webhook (an in-app HTTP server for interacting with your application). The code is copyrighted by NTESS, and we received permission from the Department of Energy to open source it under the MIT license. We even show up in DOE Code now.
This was the first time I've done an open source release at work, so it was an adventure figuring out what we had to do. The initial step was just getting all of our code together in one repo we could export. We wound up merging several repos together and refactoring the build system, which made the whole thing easier to use. We then ran our tests over and over on different platforms until the code was in a stable form that ran everywhere. Once all of that was in order, we started through the legal parts of the release process.
In order to release the software we needed to declare the license we were going to use and do a copyright assertion. My initial instinct with the license was just to use the MIT license, since it's simple and open. When I talked it over with the group, though, I started to see how the protections provided in other licenses (e.g., BSD or Apache) might be better for us. The discussions dragged on and got more complicated (at one point someone even roped in a professor at UCSC). I eventually got fed up and went with my initial instinct: I just want people to be able to use the software, so the MIT license is just fine.
The next step in the legal process was figuring out the right way to insert the NTESS copyright message. I see a lot of code these days where both the copyright and the full license are stamped at the top of every source file. It drives me crazy because I hate scrolling through code just to figure out what the API calls are. I've read that adding all this junk is not necessary from a legal perspective if it's all documented in the top directory. However, one of my developers noted that he appreciates seeing a legal note on the files so he knows where the code came from after installation. I agreed that this was useful, and wrote a script to prepend the 3-line copyright notice our legal people asked us to use onto all of our source files. I also had to mark up directories for third-party libraries (e.g., tcmalloc) that we include but did not originally author.
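For anyone facing the same chore, the header-prepend step is easy to script. Below is a minimal sketch of how such a script might work; the header text here is a generic placeholder, not NTESS's actual 3-line notice, and the file extensions are just the usual C/C++ suspects.

```python
#!/usr/bin/env python3
"""Prepend a short copyright header to source files that lack it.

The HEADER below is a placeholder stand-in for the real legal notice.
"""
from pathlib import Path

HEADER = (
    "// Copyright YYYY Example Organization.\n"
    "// This software is released under the MIT license.\n"
    "// See the top-level LICENSE file for details.\n\n"
)

def prepend_header(path: Path) -> bool:
    """Add HEADER to `path` unless it's already present. Returns True if modified."""
    text = path.read_text()
    if HEADER.splitlines()[0] in text:
        return False  # already stamped; skip so the script is safely re-runnable
    path.write_text(HEADER + text)
    return True

if __name__ == "__main__":
    import sys
    if len(sys.argv) < 2:
        print("usage: stamp_headers.py <source-root>")
    else:
        # Walk the tree and stamp every C/C++ source or header file
        for src in Path(sys.argv[1]).rglob("*"):
            if src.suffix in {".c", ".h", ".cpp", ".hpp"}:
                if prepend_header(src):
                    print(f"stamped {src}")
```

Making the script idempotent (the "already stamped" check) matters more than it looks: you will inevitably run it more than once, and double-stamped files are exactly the scrolling clutter you were trying to minimize.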
Next, I had to do a code review with a reviewer to make sure that nothing bad was going into the release. This linting process meant going through all the code and determining whether there was anything sensitive that could cause problems. In addition to the things I'm used to looking for in these reviews, we had to look for crypto-related code, because an open source release has to be treated as an international export. Interestingly, the fact that we reviewed the code got it marked as an export-controlled item. For a few weeks there we were technically rated as EAR99, the lowest export-control rating they can place on something. Fortunately, after everything cleared in the process, we were reclassified as publicly releasable code with no export issues.
After all the signoffs, the lab submitted the release request to the Department of Energy for approval. The DOE has very positive policies for open sourcing software, so it wasn't much of a surprise that they OK'd NTESS's copyright assertion and open source release of this software. One of the perks of having DOE be involved in the process is that they route your info into gov code databases like DOE Code. According to one of the talks at the ECP meeting this year, we're supposed to be assigned a universal DOI record at some point. We're in the system now, but it doesn't look like the DOI has happened yet.
In any case it's great to be done with the release. I'm not expecting other people to use it, but at least we've got a placeholder now.
The code is now hosted at GitHub:
As the PI for the data portion of Sandia's ATDM Data and Viz project, I needed to give a status update about our work at the annual all-hands Exascale Computing Project meeting in Knoxville. I put together the poster below, which talks about (1) improvements we've made to SNL's IOSS mesh database library and (2) our work with FAODEL.
Poster presented at the ECP Meeting
I'm going to be traveling to Knoxville, Tennessee in about a week to go to a big all-hands meeting for the Exascale Computing Project. While Knoxville seems like a fun city, I'm dreading the travel because of the time change and the difficulty of flying there from the Bay Area. Knoxville's airport is tiny and doesn't have many flights from this side of the country. Last year when I went to ECP, my SJC to ATL flight was delayed and I was lucky to get the last seat on the last plane of the night (I had visions of renting a car and driving from Atlanta to Knoxville in the middle of the night).
While making a poster for this trip, I started thinking it'd be fun to use some of the airplane flight data in an example for Kelpie. I dusted off my datasets, learned the basics of Boost's Geometry library, and wrote some simple C++ examples that digested and analyzed my data. I then wrote a simple tool to identify flights that landed at a particular airport and dump the entire day's track for those planes. The idea was that I wanted to know how far I could get from an airport without changing planes. I plotted the data in matplotlib using the plotting tool I wrote a while back.
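The actual tool was C++ on top of Boost.Geometry, but the core matching step is simple enough to sketch in a few lines of Python. This is just an illustration with a made-up track-record layout (flight id, timestamp, lat, lon), not the real dataset's format: group points into per-flight tracks, then keep the tracks whose last report is within some radius of the airport.

```python
import math
from collections import defaultdict

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def flights_landing_near(track_points, airport, radius_km=5.0):
    """Return the full day's track for every flight whose final position
    report is within radius_km of the airport's (lat, lon)."""
    tracks = defaultdict(list)
    # Sort by timestamp so the last point in each track is the landing
    for flight_id, ts, lat, lon in sorted(track_points, key=lambda p: p[1]):
        tracks[flight_id].append((ts, lat, lon))
    lat0, lon0 = airport
    return {fid: pts for fid, pts in tracks.items()
            if haversine_km(pts[-1][1], pts[-1][2], lat0, lon0) <= radius_km}
```

From there, plotting each surviving track shows where those planes spent the rest of the day, which is exactly the "how far can I get without changing planes" question.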
As the plots show, you don't have many options if you want to go west from Knoxville. I didn't put it on the poster, but if you wanted to minimize travel pain for this conference and host it near a national lab, the right place to do it is at Argonne near Chicago. It has plenty of direct flights and is at least closer to the middle of the country. However, Chicago in February doesn't sound like the best idea to me.
It's official: I'm renaming my main project at work to FAODEL: Flexible, asynchronous, data-object exchange libraries. FAODEL (pronounced fā-ō-del) comes from a simplification of the Gaelic term faodhail, which is a land bridge used to cross between islands. Here are two examples between the Monach Islands in Scotland:
Nessie, Kelpie, and Scottish Names
My main project at work for the last few years has been writing data management services for HPC applications. Sandia's I/O group previously built an RDMA portability layer called Nessie to support the Lightweight File System (LFS). I initially built a key/value store on top of Nessie. I wanted to keep the Scottish monster theme going, so I decided to call it Kelpie (a kelpie is a water horse in Scottish mythology that drags people to a watery death). As Kelpie evolved we started adding more packages with Scottish/Gaelic names. We named our memory manager Lunasa (a Gaelic harvest festival) and our boot services became Gutties (a cheap gym shoe in Scotland). It didn't take long for us to realize that there were a lot of issues with using Scottish/Gaelic terms to name things. First, the words are often difficult to spell and pronounce. Second, we've had trouble finding other Scottish mythical beasts we could swipe. And finally, it seems like every Scottish word has a slang meaning that would make us hesitant to use it at a conference. As such, when we talk about our different software packages, we've been referring to them by our project name, which is "Data Warehouse".
ATDM "Data Warehouse" Origins and Problems
Three years ago the labs realized that they needed to do something different if they wanted their codes to scale up to exascale computing platforms. The ATDM project was formed to develop new software infrastructure that would allow new codes to achieve better performance than MPI-based approaches. The main idea was to use overdecomposition and task-DAG programming models (aka "asynchronous many-task" or AMT) to overcome dynamic load-balancing problems while improving developer productivity. Existing frameworks (e.g., Charm++, Legion) didn't fully meet our requirements, so the DARMA team set about building a new AMT API that leveraged modern C++ features and could be retargeted to run on top of different runtimes (e.g., DARMA on Charm++). From an I/O perspective, applications needed a way to allow dynamically-placed tasks to exchange data with existing mesh databases and storage tools (all of which are built on static distributions). Our project was started as a way to manage AMT data in this context. Given that other AMTs used the term "Data Warehouse" for their storage, we became ATDM's "Data Warehouse".
The problem with the term Data Warehouse is that it has a specific meaning for I/O people. In the 1970s Bill Inmon started using Data Warehouse to refer to the idea of centrally storing/indexing all of an organization's data, instead of spreading it out among many smaller databases. Inmon has written articles that point out how NoSQL people have hijacked his term (which I agree with), so I've always cringed at having to refer to ourselves as ATDM's DW. It's difficult to change a funded program's name, though, once it's on the books.
Faodail and Faodhail
Recently, we've been reorganizing our code so that we can go through the official open-source release process. We've generalized our scope so we can serve more than just DARMA, so I decided it was a good time to revisit project names. I found the name "Faodail", which people say is a "lucky find, usually of a lost item". That seemed like a good fit for a key/blob service, so we started using it (it even made its way into a paper an intern wrote). There were a few problems with faodail, though: (1) we found it was difficult to pronounce, (2) the internet already had some references to it, (3) taking the "aod" out of "faodail" makes it fail, and (4) Google translates faodail to "maiden" (??). It seemed like a bad idea.
I went back to the naming game and searched some more. After a lot of misses (and general disgust from my team), I noticed in WikiSource that the term right after faodail is "faodhail". The definition for faodhail is:
faodhail, ford, a narrow channel fordable at low water, a hollow in the sand retaining tide water: from N. vaðill, a shallow, a place where straits can be crossed, Shet vaadle, Eng. wade.
Looking around more, I found maps of different faodhails around Scotland. These are regions where the tides go out and leave a land bridge that lets you cross between the islands. This meaning is perfect for what our software does: we build communication libraries that let you move data between different application islands.
Faodhail was longer than faodail and shared the same problems. I finally realized that what I needed to do was ditch the actual word and just build an acronym that fixed the problems. Moving to a name that was shorter and more phonetic helped a good deal. I eventually worked out words that fit: Flexible, Asynchronous, Object Data-Exchange Libraries (FAODEL). It's not great, but the words do relate to what we're doing. Once I convinced myself it was the right thing, it was easy to tell the team what I wanted and have the confidence to make it stick.
In the fall we did an initial investigation into how Sandia's new SPARC application could leverage the Cray XC40's DataWarp burst buffers to improve I/O performance. As part of this effort we looked at four options: simply mapping existing I/O to the burst buffer, writing checkpoints out using LANL's Hierarchical I/O (HIO) library, using DataElevator to route our HDF5 calls to the burst buffers, and writing checkpoints out via Kelpie. We presented what we'd learned to the SPARC lead and wrote up the lessons learned in this ECP report.
The main takeaway for us was that the easiest thing to do for our workflows was to just use the DataWarp directives in slurm and write files out directly. HIO was more complicated than we needed it to be. DataElevator was not compatible with the per-rank file I/O that Sandia uses. We did write a checkpoint stub for SPARC that wrote results out to Kelpie and demoed it to the lead. However, we warned that since the checkpoint/restart data wasn't useful for downstream analytics, Kelpie would be overkill in most cases. SPARC was updated to have multiple I/O backends, but I don't think anyone used anything but the plain POSIX I/O version.
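To give a flavor of why the directive route was the path of least resistance: DataWarp allocations and staging are requested with `#DW` comment directives in the slurm batch script, so the application itself just sees a normal filesystem path. The sketch below uses placeholder sizes, paths, and application flags, not SPARC's actual job script.

```shell
#!/bin/bash
#SBATCH -N 64
#SBATCH -t 02:00:00

# Request a job-lifetime DataWarp scratch allocation (capacity is a placeholder)
#DW jobdw capacity=1TiB access_mode=striped type=scratch

# Stage the restart data into the burst buffer before the job starts...
#DW stage_in source=/lustre/scratch/me/restart destination=$DW_JOB_STRIPED/restart type=directory

# ...and drain checkpoints back to Lustre after the job ends
#DW stage_out source=$DW_JOB_STRIPED/ckpt destination=/lustre/scratch/me/ckpt type=directory

# The application writes to $DW_JOB_STRIPED like any other directory;
# the app name and flags here are hypothetical
srun ./app --restart-dir "$DW_JOB_STRIPED/restart" --checkpoint-dir "$DW_JOB_STRIPED/ckpt"
```

Since staging happens outside the job's allocated time and the application code is untouched, this approach costs almost nothing to adopt, which is hard for a library-based solution to compete with.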
Recent high-performance computing (HPC) platforms such as the Trinity Advanced Technology System (ATS-1) feature burst buffer resources that can have a dramatic impact on an application's I/O performance. While these non-volatile memory (NVM) resources provide a new tier in the storage hierarchy, developers must find the right way to incorporate the technology into their applications in order to reap the benefits. Similar to other laboratories, Sandia is actively investigating ways in which these resources can be incorporated into our existing libraries and workflows without burdening our application developers with excessive, platform-specific details. This FY18Q1 milestone summarizes our progress in adapting the Sandia Parallel Aerodynamics and Reentry Code (SPARC) in Sandia's ATDM program to leverage Trinity's burst buffers for checkpoint/restart operations. We investigated four different approaches with varying tradeoffs in this work: (1) simply updating job scripts to use stage-in/stage-out burst buffer directives, (2) modifying SPARC to use LANL's hierarchical I/O (HIO) library to store/retrieve checkpoints, (3) updating Sandia's IOSS library to incorporate the burst buffer in all mesh I/O operations, and (4) modifying SPARC to use our Kelpie distributed memory library to store/retrieve checkpoints. Team members were successful in generating initial implementations for all four approaches, but were unable to obtain performance numbers in time for this report (reasons: initial problem sizes were not large enough to stress I/O, and a SPARC refactor will require changes to our code). When we presented our work to the SPARC team, they expressed the most interest in the second and third approaches. The HIO work was favored because it is lightweight, unobtrusive, and should be portable to ATS-2. The IOSS work is seen as a long-term solution, and is favored because all I/O work (including checkpoints) can be deferred to a single library.