You have all probably heard of the steam engine, right? (And chances are you have never actually seen one; maybe never even bothered to look up an image until now.) It is what's called an external combustion engine: fuel is burned outside the engine to create steam inside it, and the steam pushes something (usually a piston) to do work. It's mostly obsolete now because internal combustion engines have replaced it, et cetera, et cetera.
While the Industrial Revolution was in full swing during the 18th and 19th centuries and the steam engine was all the rage, there was another type of engine that was supposed to be the future: the Stirling Engine. But it wasn't. End of the line for this other weird external combustion engine.
Why is it weird?
If a certain Mr. Layman asks, I would say a Stirling engine is one that operates purely on a temperature difference between two points (one inside the engine, the other outside). It doesn't matter whether the point inside the engine is hot and the outside is cold, or vice versa. It will work either way.
I was refreshing my SolidWorks knowledge and found something fun on a YouTube channel where a guy assembles a Stirling engine. It was something I had never heard of before, and naturally it got me curious enough to build one. I think it may even be possible to animate this one using Composer, but that's for another time. Below is a Stirling engine model that takes heat from the outside to do work.
There are hundreds of variations of the Stirling engine, and it might seem like the ideal engine, as if it could run forever! It barely makes a sound while operating, unlike the annoying ones we have to endure in traffic, yet it is comparable in efficiency to a Diesel engine. Some people think these engines could become sustainable means of recycling waste heat, which is pretty cool for the environment.
But no, it still can't run forever, because that would break thermodynamics, make Carnot mad, and the universe as we know it would implode if we made Physics angry!
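Since Carnot got dragged into this: the efficiency ceiling he imposes on any heat engine, Stirling engines included, is a one-liner. A minimal sketch in Python; the coffee-cup temperatures are illustrative numbers of my own, not from any engine spec:

```python
# Carnot efficiency: the hard ceiling thermodynamics places on any heat
# engine operating between two temperature reservoirs.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work between two reservoirs (temperatures in kelvin)."""
    if t_cold_k >= t_hot_k:
        raise ValueError("hot reservoir must be hotter than cold reservoir")
    return 1.0 - t_cold_k / t_hot_k

# A desktop Stirling engine running off a cup of coffee (~80 C) into room air (~20 C):
eta = carnot_efficiency(t_hot_k=353.15, t_cold_k=293.15)
print(f"Carnot limit: {eta:.1%}")  # about 17%, and real engines get far less
```

Even that 17% is the theoretical best; friction and imperfect heat transfer eat into it long before the universe has to implode.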
This study investigated the surface roughness of 3D printed objects through a statistical experiment and X-ray computed tomography of their shells and internal structures, to determine the optimal configurations for print quality.
A wide range of optimal configurations was determined. The study confirms the existing literature finding that infill level does not play a major role in the surface quality of printed objects. Pigmentation of the material does not influence the final surface quality at the chosen temperature; however, natural PLA is consistently present in all sets of ideal configurations. The material does shrink, adding to the unevenness of the surface and affecting the overall dimensions. Shape plays the most important role in determining the surface quality of these objects.
When an appropriate configuration is used, it is possible to minimize the number of rejected prints and avoid wasting filament. The study also shows that tapered objects such as cones exhibit more unevenness on their external surfaces than non-tapered objects such as cylinders. This confirmation is very useful for making appropriate design choices when printing, and for making additive manufacturing more sustainable.
A future direction is to investigate the surface quality of objects printed with and without support structures, while also considering polyhedral objects.
Adam G A O, Zimmer D (2015) On design for additive manufacturing: evaluating geometrical limitations, Rapid Prototyping Journal, 21/6:662-670. DOI 10.1108/RPJ-06-2013-0060
Afrose F M, Masood S H, Iovenitti P, Nikzad M, Sbarski I (2015) Effects of part build orientations on fatigue behavior of FDM-processed PLA material, Progress in Additive Manufacturing 1: 21. DOI: 10.1007/s40964-015-0002-3
Alfaghani A, Qattawi A, Alrawi B, Guzman A (2017) Experimental Optimization of Fused Deposition Modelling Processing Parameters: a Design-for-Manufacturing Approach, Procedia Manufacturing, Open Journal of Applied Sciences, 7, 291-318. DOI 10.4236/ojapps.2017.76024
Armillotta A (2006) Assessment of surface quality on textured FDM prototypes, Rapid Prototyping Journal 12/1:35-41. DOI 10.1108/13552540610637255
Babout L (2006) X-Ray Tomography Imaging: A Necessary Tool for Material Science. Automatyka 10:117–124
Bill V, Fayard A (2017) Building an Entrepreneurial and Innovative Culture in a University Makerspace. URL https://peer.asee.org/27985, accessed 17 July 2017
Boschetto A, Veniali F (2010) Intricate Shape Prototypes Obtained by FDM, International Journal of Material Forming 3/1:1099-1102. DOI 10.1007/s12289-010-0963-1
Cruz Sanchez F A, Lanza S, Boudaoud H, Hoppe S, Camargo M (2015) Polymer Recycling and Additive Manufacturing in an Open Source Context: Optimization of Processes and Methods. pp 1591–1600
Cuiffo M, Snyder J, Elliott A, Romero N, Kannan S, Halada G P (2017) Impact of the Fused Deposition (FDM) Printing Process on Polylactic Acid (PLA) Chemistry and Structure. Appl Sci 7:579. DOI 10.3390/app7060579
Di Angelo L, Di Stefano P, Marzola A (2017) Surface quality prediction in FDM additive manufacturing, International Journal of Advanced Manufacturing Technology 93: 3655. DOI 10.1007/s00170-017-0763-6
Freitas D, Almeida H A, Bártolo H, Bártolo P J (2016) Sustainability in extrusion-based additive manufacturing technologies. Progress in Additive Manufacturing 1:65–78. DOI 10.1007/s40964-016-0007-6
Gajdoš I, Slota J (2013) Influence of Printing Conditions on Structure in FDM Prototypes. Tehnički vjesnik 20:231–236.
Garlotta D (2001) A Literature Review of Poly(Lactic Acid). Journal of Polymers and the Environment 9:63–84. DOI 10.1023/A:102020082
Galantucci L M, Bodi I, Kacani J, Lavecchia F (2015) Analysis of dimensional performance for a 3D open-source printer based on fused deposition modeling technique, Procedia CIRP 28:82-87. DOI 10.1016/j.procir.2015.04.014
Huang T, Wang S, He K (2015) Quality Control for Fused Deposition Modeling Based Additive Manufacturing: Current Research and Future Trends, The First International Conference on Reliability Systems Engineering. DOI 10.1109/ICRSE.2015.7366500
Jensen M, Wilhjelm J E (2007) X-Ray Imaging: Fundamentals and Planar Imaging. URL http://www2.compute.dtu.dk/courses/02511/docs/X-RayAndCT.pdf, accessed 17 July 2017
Lindermann C, Jahnke U, Moi M, Koch R (2012) Analyzing Product Lifecycle Costs for a Better Understanding of Cost Drivers in Additive Manufacturing, 23rd Annual International Solid Freeform Fabrication Symposium. pp 177-188
Mitra A (2012) Fundamentals of Quality Control and Improvement, seventh edn. John Wiley & Sons, Inc., Hoboken, New Jersey
Montgomery D C (2013) Design and Analysis of Experiments, eighth edn. John Wiley & Sons, Inc., Hoboken, New Jersey
Polak R, Sedlacek F, Raz K (2017) Determination of FDM Printer Settings with Regard to Geometrical Accuracy, Proceedings of the 28th DAAAM International Symposium. pp 561-566
Pérez M, Medina-Sánchez G, Garcia-Collado A, Gupta M, Carou D (2018) Surface Quality Enhancement of Fused Deposition Modeling (FDM) Printed Samples Based on the Selection of Critical Printing Parameters, Materials 11:1382
Rahmati S, Vahabli E (2015) Evaluation of analytical modeling for improvement of surface roughness of FDM test part using measurement results, International Journal of Advanced Manufacturing Technology 79:823-829. DOI 10.1007/s00170-015-6879-7
Redwood B, Schöffer F, Garret B (2017) The 3D Printing Handbook: Technologies, Design and Applications, first edn. 3D Hubs, Amsterdam
Valerga AP, Batista M, Puyana R, Sambruno A, Wendt C, Marcos M (2017) Preliminary Study of PLA Wire Colour Effects on Geometric Characteristics of Parts Manufactured by FDM. Procedia Manufacturing 13:924–931. DOI 10.1016/j.promfg.2017.09.161
Wittbrodt B, Pearce J M (2015) The Effects of PLA Color on Material Properties of 3-D Printed Components. Additive Manufacturing 8:110–116. DOI 10.1016/j.addma.2015.09.006
Using analysis software supplied with the scanner, called CT-Analyser, it was possible to take measurements that reveal even the smallest anomalies in the scanned objects. First, the layer thicknesses of the scanned objects were measured. The results showed that the thickness was close to the theoretical value. However, they also showed that each successive layer causes the print material to shrink, making some layers protrude outside the expected region, which affects the overall dimensions and hence the surface quality, as seen in Fig. 8. The horizontal cross sections at 20% and 80% infill show that the infill does not completely meet the wall of the object. As each successive layer is printed, the points where any two paths intersect show a higher amount of deposited PLA.
Fig. 8 Uneven surface of a scanned hollow pink cone
The box plots of the average layer thickness in all the scanned objects and of the average distance between consecutive edges in the cones are compared in Fig. 9. The horizontal line in the middle of each plot indicates the median value. Layer thickness is a critical factor that directly affects surface quality. The analysis shows that the layer thickness is close to the mean value but always less than the expected value, indicating shrinkage; this is true for all scanned objects.
For a cone, each successive layer must be smaller than the layer beneath it; as the cone tapers, the layers get consecutively smaller, and the shape tends to worsen. The distance between the edges of two consecutive layers should be constant, since these are right circular cones. However, when this distance is measured using CT-Analyser, the values are highly inconsistent at all infill levels. This is especially visible in the upper layers of the cone in the scans, as seen in Fig. 8.
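That constant edge-to-edge distance follows from simple geometry, and a short sketch makes the expected value concrete. The dimensions below are hypothetical stand-ins, not the ones used in the study:

```python
import math

# For an ideal right circular cone, each printed layer's edge steps inward
# by a constant horizontal distance: layer_height * tan(half_angle), where
# half_angle is the cone's half-apex angle seen from the axis.

def edge_step(layer_height_mm, base_radius_mm, cone_height_mm):
    """Expected horizontal offset between the edges of two consecutive layers."""
    half_angle = math.atan(base_radius_mm / cone_height_mm)
    return layer_height_mm * math.tan(half_angle)

# This simplifies to layer_height * (base_radius / cone_height):
step = edge_step(layer_height_mm=0.2, base_radius_mm=15.0, cone_height_mm=30.0)
print(f"expected edge-to-edge offset: {step:.3f} mm per layer")  # 0.100 mm
```

Any measured deviation from this constant offset is therefore a direct indicator of layer misalignment or shrinkage.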
Fig. 9 Box plots of various measurements done on the scanned objects.
The tomographic images show irregularities in the final few layers of the cones, regardless of infill. Pigmentation may influence certain properties, but it affects surface quality the least. When the tomographic images of the natural PLA cylinder and the pink PLA cylinder were compared, there was little to no difference in the evenness of their shells. It should be noted, however, that inside the hollow cylinders the final layers sag, leaving unnecessary frizzy material inside the shell, as seen in Fig. 5 and Fig. 6 for both the pink and natural cylinders.
The infill does not affect the surface quality of the object by much [3,21], as confirmed by the results in Tables 2-4. The cylinder has an even surface; even when it is hollow, the final layers are printed uniformly. The cones are uneven regardless of infill, and their final layers tend to misalign. This is indicated by the tomographic scans and by the analysis of the distances between consecutive edges, which shows high variability. The accompanying video shows the individual layers of a 20% infill pink cone. The transition of each layer (shown in light blue) reveals unevenness in the edges, since they are not perfectly circular, indicating surface roughness. The statistical experiment also repeatedly places cones in the least ideal configurations, supporting the argument that tapered objects tend to have poorer surface quality.
The X-ray computed tomography shows the differences in the external shells of the prints as well as in the internal structure of the infill. Each layer of PLA can be easily observed in the images. For simplicity, the most representative image from among the hundreds in each captured dataset is shown in Figs. 5-7; the number in the top right corner of each is its index within that dataset. The blue objects were not scanned because of time constraints and the lengthy scanning process. The differences in background color are due to adjustments made to obtain good contrast where necessary.
Fig. 5 X-ray CT scanned images of the outer shell of all objects.
Fig. 6 X-ray CT scanned images of the vertical cross section of all objects
Fig. 7 X-ray CT scanned images of the horizontal cross section of all objects
There are very few differences between the morphological structures of cylinders of either color at the same infill level. There are, however, variations in the way the infill is printed inside the shells of the objects. Note that in Fig. 6 and Fig. 7, the cross-sectional images with a darker background have a different contrast level than those with a lighter background. This happened during image adjustment in NRecon, the software used to reconstruct the shadow projections, and it does not affect the analysis.
When performing the scans, it is not always possible to capture the entire object, as the field of view of the detector is limited (less than 30 mm). Hence only the top portion of the cone was captured. For the cylinders, the top was again captured, but since the diameter was around 30 mm, the detector could only cover a little over four-fifths of it, so only part of the wall is visible. Again, this does not affect the measurements, as the objects are symmetric about their vertical planes.
When all the images were finally scanned (seriously, I may have used the CT scanner more than some Ph.D. candidates do in a few weeks), only one process remained: analysis (and visualization, and interpretation, and inference, and the write-up, and organizing the dataset, and much more).
The scanned images can easily be saved in popular formats such as PNG and TIFF. This makes them easy to view in commonly available image-viewing software, which, as we all know, is Microsoft's Photo Viewer, because Google decided to discontinue Picasa (which is a shame, in my opinion)…
Skyscan comes with its own analysis software called CTAn, which can be used not only to analyze individual images but also to measure tiny flaws, if any are found! Perhaps its most amazing feature is the ability to analyze multiple images at the same time.
First, the region of interest is set between an upper and a lower limit of an image sequence. A threshold can then be set to produce binary images, for each image or for the entire data sequence, and the histograms inspected. From this, the density can be found (in this case, the density of the polylactic acid that fills the specimen). Finally, it is possible to find the mean total value of the voxels (simply put, three-dimensional pixels) and save the calculations if necessary (and it is, because we're analyzing). These values are then used to calibrate the attenuation and compare results. The same process is repeated for each scanned specimen. Quite mind-numbing, but necessary for what I was doing. Below is an example from a colored cone (20% infill, pink).
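CTAn itself is proprietary, but the thresholding-and-density step it performs can be sketched in NumPy to show the idea. The threshold value and the synthetic image stack below are stand-ins, not the project's actual data:

```python
import numpy as np

# Sketch of CTAn's thresholding step: turn a grayscale slice stack into a
# binary volume (PLA vs. air), then report the fraction of filled voxels,
# which serves as a proxy for material density in the region of interest.

rng = np.random.default_rng(0)
stack = rng.integers(0, 256, size=(10, 64, 64), dtype=np.uint8)  # 10 fake slices

THRESHOLD = 128              # in practice chosen per dataset from the histogram
binary = stack >= THRESHOLD  # True where "PLA", False where "air"
fill_fraction = binary.mean()  # filled voxels / total voxels

print(f"filled voxel fraction: {fill_fraction:.3f}")
```

On real scans the threshold is the critical choice: set it too low and noise counts as material; too high and thin walls vanish from the binary volume.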
Another important feature of CTAn is showing density profiles for each image slice, but the most underrated and least frequently used feature is performing dimensional measurements, which was the primary focus of the Optimization of 3D Prints project. Dimensions such as layer thickness, empty areas inside the object, the position of each layer, the alignment of layers, the angle between two subsequent layers, the thickness of the shell, and so on could be calculated, which was a tedious task to perform (the things we do to seek the truth, am I right?). With this, additional statistical analyses became possible.
Then there is another piece of software called CTVox, which is used to construct a three-dimensional view (also called a volume rendering in this case) of the internal and external morphological features of each specimen. I may upload videos of them in the future, but for now there is only a picture, as each video can be a whopping 10 gigabytes (and they look beautiful)! Below is an example image of a volume render of a cylinder.
It is also possible to create moving heat maps in CTAn, if you know what you're doing, as I showed in this particular post. Heat maps are very cool (and so are oxymorons)!
With this, I conclude this (slightly comical) mini-series of showing how X-ray Tomography can be done, and how it was used for my project. Sorry, but there is no party (I meant Part-E).
Starting from the next time, we’ll return to Optimization of 3D Prints and finally see how the project ended.
Unfortunately, I have been busy with other work over the past few weeks and didn't get the chance to write Part D of my X-Ray Tomography mini-series. To compensate, I have some footage of what the insides of the CT scanner look like with its mechanisms running. It is just a short, roughly 52-second animated GIF made from the video.
I would like to imagine the scan is going on or I’m performing the analyses while this is happening:
There is a good reason why this article is posted after more than a month. It's because I wanted to get a feel for how long the scan takes in reality and re-live it… (Or I may have lied and was busy with something; we'll likely never find out.)
Now that the scan is over, the images had to be reconstructed to get a 3D map of the scanned specimen. The software uses an algorithm whose name I can't recall, because I didn't write it down anywhere. Fortunately, I know exactly what it does: it creates image slices that show the density within the specimen at specific heights (i.e., how much empty space and how much filled space there is in each sliced layer of the object).
The software NRecon was used to reconstruct the captured images into full-fledged, high-quality 3D images. A correction must be applied to each image to compensate for any deviations caused by temperature changes during scanning. There are several options available for this procedure; fortunately, since the scanner itself was new, the corrections were fairly minor.
Next is compensating for beam hardening, if any, which is done to keep the density profiles flat. After that come ring artifact corrections, which rectify the rings that form in reconstructed images due to miscalibration of the sensor (we're all human, so these things can happen during initial setup). Rings are bad, but images containing them can be rectified. Below is an example of a ring artifact, which, again, is undesirable. This is especially true if your image begins to look like a vinyl record…
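For the curious, the core idea behind ring artifact correction can be sketched in a few lines: a miscalibrated detector pixel shows up as a vertical stripe in the sinogram, and the reconstruction smears that stripe into a ring, so removing the stripe removes the ring. NRecon's actual algorithm is proprietary and more sophisticated; this toy version on synthetic data only illustrates the principle:

```python
import numpy as np

# A detector pixel that always reads slightly off produces a constant
# vertical stripe in the sinogram (angles x detector pixels). A crude fix:
# estimate each column's bias with a median and subtract it.

rng = np.random.default_rng(1)
sinogram = rng.normal(1.0, 0.05, size=(360, 128))  # clean synthetic sinogram
sinogram[:, 40] += 0.3                             # one "bad" pixel -> stripe

column_offsets = np.median(sinogram, axis=0)       # per-pixel bias estimate
corrected = sinogram - (column_offsets - np.median(column_offsets))

print("stripe column mean before:", round(sinogram[:, 40].mean(), 2))
print("stripe column mean after: ", round(corrected[:, 40].mean(), 2))
```

Subtracting the column medians (re-centered on the global median) flattens the stripe without touching genuine density variation that changes with angle.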
There is also a tool for reducing noise in the images and smoothing them. The images can then finally be saved in various formats, and the saved images can be visualized and analyzed in other software. I saved my images in BMP and TIFF formats.
Fortunately, batch reconstruction is an option, so large quantities of images can be corrected and reconstructed at the click of a button once the settings are ready. Imagine having to reconstruct each individual image, save it, and then analyze it… it would take months, especially at 4K resolution!
In my project, the reconstruction process was done for four infill levels each for the colored cones and for both the transparent and colored cylinders (a grand total of 12 data sets, each with 3 sub-types). That genuinely took a lot of time.
With this, we end another part on this extremely interesting process of reconstruction. It looks short when you read it, but it is ridiculously convoluted, especially if you’re doing it for the first time. Next time we will talk about visualizing the images in 2D and 3D, and maybe a bit on how to analyze them.
Nope. Not yet. We have some housekeeping to do, before that happens.
The base of each specimen was flat enough that no additional material (like wax) was needed to hold it in place. The paraffin film did a good job of keeping these relatively large (compared to the X-rays, of course) objects in one spot. Good. Now the specimen is in place and its stage is mounted inside the scanning chamber. Close the door (safety first, remember? Yes, yes you do) and prepare for the beam to energize.
One of the advantages of living in today's world is that software can control your hardware without you having to adjust things manually. Apparently, in the olden days, when they wanted to do these experiments, they used to adjust the stages by hand for each performance (no, that was a lie). The SkyScan software was used to adjust the position of the specimen: vertical, horizontal, radial, you name it.
It was necessary to adjust the voltage and current so that the power of the X-rays emitted by the source would always stay around 10 watts and never exceed it. This was done to adjust the contrast of the images. The voltage and current were 44 kV and 222 μA for this experiment. (Psssttt… multiply the two to get the power.)
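Spelling out that aside: X-ray tube power is simply voltage times current, and the chosen settings sit just under the cap.

```python
# X-ray source power = voltage * current, using the values from the scan log.

voltage_v = 44_000   # 44 kV
current_a = 222e-6   # 222 microamps

power_w = voltage_v * current_a
print(f"source power: {power_w:.2f} W")  # 9.77 W, safely under the 10 W cap
```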
Next was an important step called Flat Field Correction, which I'm fondly going to call FFC and then never use that abbreviation again. This step produces uniform brightness in the background and calibrates the sensor on the other side. This was also when the resolution, pixel size, and so on had to be chosen.
And finally, it was time to scan. The step angle of the stage's rotation was set to an appropriate value so as not to waste time: the smaller the angle, the longer the scanning time. I've seen some scans run for days!
The camera sensor was set to capture multiple images at each rotation step and average them to reduce errors and smooth the final image, a process called Frame Averaging. There was also Random Movement correction to take care of any dead pixels, because cameras are dainty and don't age well.
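Why bother averaging six frames per step? Uncorrelated sensor noise shrinks roughly as one over the square root of the number of frames averaged. A quick check on synthetic noisy frames (the image values and noise level here are made up for illustration):

```python
import numpy as np

# Frame averaging: averaging N frames of the same view reduces uncorrelated
# noise by roughly a factor of sqrt(N).

rng = np.random.default_rng(42)
true_image = np.full((64, 64), 100.0)                        # ideal noiseless view
frames = true_image + rng.normal(0, 10.0, size=(6, 64, 64))  # 6 noisy captures

single_noise = np.std(frames[0] - true_image)
averaged_noise = np.std(frames.mean(axis=0) - true_image)

print(f"single-frame noise:    {single_noise:.2f}")
print(f"6-frame average noise: {averaged_noise:.2f}")  # roughly 10/sqrt(6), ~4.1
```

The trade-off is exposure time: six frames per step at 238 ms each is most of what makes a fine-stepped scan take so long.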
As far as I can remember, there was also an option to turn the X-ray source off or leave it on after the scan ends; off should technically be the only option… because we're talking about X-rays here.
Alright! Now, the scan has begun, and below is an image of the Scanner preparing to do its thing.
Here are some of the details I collected from the log file, because obviously I can't remember everything that happened in August 2017 at the moment (yes, that was when the scan was done; the actual project had started months before that):
Source type: Hamamatsu 100/250
Camera: SHT 11Mp
Camera pixel size: 9.00 μm
Source voltage: 44 kV
Source current: 222 μA
Frame averaging: ON (6)
Random movement: ON (8)
Vertical object position: 33.693 mm
Exposure: 238 ms
Rotation step: 0.300°
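From those logged settings you can do a back-of-envelope estimate of the scan duration. Whether the sweep covers 180 or 360 degrees, and the per-step mechanical overhead, are my assumptions here, not values from the log:

```python
# Rough scan-time estimate from the logged acquisition settings.

rotation_step_deg = 0.300   # from the log
frame_averaging = 6         # frames captured and averaged per step (from the log)
exposure_s = 0.238          # per-frame exposure (from the log)

sweep_deg = 180.0           # assumed; a full 360-degree sweep doubles the count
overhead_s = 0.5            # assumed stage-movement/readout time per step

steps = round(sweep_deg / rotation_step_deg)
time_s = steps * (frame_averaging * exposure_s + overhead_s)

print(f"{steps} projections, roughly {time_s / 60:.0f} minutes")
```

Halve the rotation step and the projection count (and the time) doubles, which is why some scans really do run for days.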
These are only some of the settings; there were more. I just don't want to make these posts extremely technical. More on reconstruction, visualization, and analysis another time.
X-Ray Computed Tomography is a technique commonly used in medical imaging. You might have heard about CT scans… sounds very sciencey, medical, and technical. Because it is. The CT in CT scan is short for Computed Tomography. CT scans also have non-medical applications, for example, seeing what is inside non-living things, like 3D printed objects, my topic of interest in this instance.
Why am I explaining something I had previously mentioned in a post (kind of)?
Because this is a filler post!
No, it's not a filler, but I wanted to explain how exactly I did the tomographic scans of the 3D printed objects for the Optimization of 3D Prints project before completing its story through more posts.
The CT scanner used throughout the process of analyzing the 3D objects was the Skyscan 1172 Micro-CT scanner. Before beginning a scan, safety precautions must be taken: everything should be kept clean, the scanner must be hooked up to a computer with a powerful graphics card, the relevant software must be installed, and, this is very important, the power must be turned on. Of course, the machine is designed with all the safety measures taken into consideration. In fact, the X-ray source won't operate if the compartment (see the image below) is open, even if the software tells it to start. Still, the most important thing to keep in mind is that the machine is a powerful source of X-rays, so safety first!
Now we are ready to place the object of interest (which we usually call a specimen) on one of the many pedestal types (also called stages for some reason, as if a camera were photographing a supermodel). In this case the specimens were the different types of printed objects with different infills. Each specimen was wrapped in a paraffin sheet to keep it in place; paraffin is used because it is transparent to X-rays (i.e., the X-rays ignore it, like a person ignores their ex). These initial settings are a bit convoluted, but they must be performed to capture good-quality images without unnecessary rings forming in the final images (for now, take my word for it; I know what I'm saying when it comes to this). The Skyscan1172 software helps with all of these initial operations: adjusting the voltage and power levels, the pixel size of the images to be captured, the field of view, the position of the object, and other relevant parameters.
I don't want to make this post boring with even more technical details, so, as a layman's example, the image above shows the setup for a cylinder inside the compartment of the CT scanner. Behind the cylinder is the camera/sensor, and on the left (the open square box) is the X-ray source. More on the technical aspects another day (because I want to try to explain why this process is complicated and takes time, and why analyzing the data it produces takes even more time).
Once the images are captured, certain settings need to be adjusted so that the images form a complete picture for analysis: the size of the captured image, the position of the camera, and so on. This and more can be done in NRecon, a piece of software meant to be used with the CT scanner. As an example, the image below shows the frustum of a cone as observed by the sensor at a particular angle.
The turn-table-pedestal-stage-thing rotates the specimen while the stationary sensor captures sectional images, and all the while the source showers the specimen with X-ray beams. These images are reconstructed using NRecon, where the HSI levels, contrast, and other adjustments need to be made before the captured images are reconstructed for analysis. In my project, 1000 images were captured for each combination of infill level and object shape. (Honestly, this number is nothing. You should see the biology researchers have a go at it for the real deal.) The image below shows a snapshot of some of the settings, which were kept constant for all the different objects.
In Part B, I’ll get to the real stuff: the Process of Everything!
P.S.: You guessed it right, this post was supposed to come out a week ago, and it did! However, due to unforeseen circumstances, I couldn't finish writing it; unfortunately, the draft was scheduled to be published back then. Even what's written in this article is incomplete, which is obvious from the "Part A" in the title. I need more time to finish Part B. Who knows, there might even be a Part C, Part D… Party! We'll see… Woe is me…
What a random topic to write about while in the middle of my summarization of results of Optimizing 3D Prints!
In February 2016, some of us at NYU started working on a food-waste project, because a lot of food from events on campus was going to waste. We started something called Project Avocado. This later became the NYU Freedge, where people could leave leftover food from university events or other places inside a smart refrigerator: smart because it could count the number of times the refrigerator was used.
This project's central themes were sustainability and affordability. Tackling food waste across the university campus slowly evolved into tackling food insecurity among NYU students. The project kept growing for over a year. However, due to some complications, and also my graduation from NYU in 2017, the project came to a halt.
Recently, around Fall 2020, a new team showed interest in the project and revived it. The project is also under new "management", so to speak, with the NYU MakerSpace (the same place where I made all those machined, 3D printed, laser-cut, etc. artifacts posted all over this blog).
Why am I writing about this out of the blue? I haven't mentioned it (more or less) anywhere on this particular blog.
That's true. But that's because the project had its own blog, and I'm not a big fan of reposting things, even my own work. The project's blog was, however, highlighted on the left panel of this blog until early 2018. I removed the link because it seemed like the NYU Freedge had ended its run.
Why did I wait until now to write about this topic?
I wanted to wait at least two semesters to see whether it would continue or the team would lose interest. (Sorry if I sounded a little mean there; that was not my intention. There were a myriad of hurdles, bumps, and barriers that Professor Anne-Laure Fayard, I, and some of the old team members had to cross to get certain things done just to start this project. It also involved one of the longest email chains I have ever been a part of.)
The reason for this particular post is, in a way, to give the new team a shout-out, and also to say that I want to include the project on the left panel again, because it makes me really happy that someone took the initiative to bring this amazing project back to life.