HAVE QUESTIONS? ASK PHIL

Have questions about CAD, Fusion 360, or the Portland maker scene? Ask Phil! He’s a Principal Software Engineer at Autodesk, Inc. and teaches CAD at Portland Community College. He’s also the host of the Community Conversations series “Getting Started with 3D Modeling in Fusion 360.”

You can reach him at phil.eichmiller@autodesk.com

Phil Eichmiller — Principal Software Engineer at Autodesk, Inc.

TUTORIAL: How to use ultra realistic Quixel Mixer materials with Fusion 360 [Part 2]

Welcome back! In Part 2, we’ll explore adding Quixel Materials to your designs in Fusion 360 and setting up a rendering scene. If you haven’t already, review Part 1 and install Quixel Mixer. You’ll want to create and export a mix for use in Fusion 360 prior to the steps in this tutorial, or download an example material set here.

First, let’s create a new project in Fusion 360:

1. Create a new Fusion 360 Project

After you open Fusion 360, click “Save” and give your project a name. In this example, I used “QuixelMaterialDemo.”

After you save your project, we’ll want to create a new component and make it active.

2. Create a new Component

This is generally a good practice in Fusion 360, because we can more easily manage changes made to the design when the timeline is broken up by individual component histories. Name your component “Floor,” make sure “Activate” is selected (it should be by default), and click “OK” to continue.

Next, we’ll want to create a sketch to define the floor’s dimensions. Click “Create” and make a Center Rectangle on the bottom plane.

3. Create a Floor

Make your sketch 3 meters x 3 meters in size, with the Origin at the center. Click “Finish Sketch” to continue. If you’ve done everything right, then you should have a sketch that is fully constrained (i.e., you’ll see black lines instead of blue lines for the outer dimensions of your sketch).

Next, we’ll extrude the sketch below the plane. This will create a new body based on our sketch dimensions.

Click Create and then Extrude. Extrude the sketch -1 mm below the plane and click “OK.”
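
If you would rather script these steps, the same geometry can be built with Fusion 360’s Python API (under Tools > Add-Ins). The sketch below is my own illustration, not part of the official tutorial; it assumes a default document orientation, and all names are mine:

import adsk.core, adsk.fusion

def run(context):
    app = adsk.core.Application.get()
    design = adsk.fusion.Design.cast(app.activeProduct)
    root = design.rootComponent

    # Create and activate a new component named "Floor" (mirrors step 2)
    occ = root.occurrences.addNewComponent(adsk.core.Matrix3D.create())
    floor = occ.component
    floor.name = 'Floor'

    # Sketch a 3 m x 3 m center rectangle on the ground (XZ) plane;
    # the API's internal length unit is centimeters, so 150 cm = half of 3 m
    sketch = floor.sketches.add(floor.xZConstructionPlane)
    center = adsk.core.Point3D.create(0, 0, 0)
    corner = adsk.core.Point3D.create(150, 150, 0)
    sketch.sketchCurves.sketchLines.addCenterPointRectangle(center, corner)

    # Extrude the profile -1 mm to create the floor body (mirrors step 3)
    extrudes = floor.features.extrudeFeatures
    ext_input = extrudes.createInput(sketch.profiles.item(0),
                                     adsk.fusion.FeatureOperations.NewBodyFeatureOperation)
    ext_input.setDistanceExtent(False, adsk.core.ValueInput.createByString('-1 mm'))
    extrudes.add(ext_input)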

Next, Save the design. You’ve created your first body and now would be a good time to save your progress.

Note the reason for your save and click “OK.”

Next, we’ll want to change the Appearance of our floor. Click Modify > Appearance to bring up the Appearance window.

4. Add material

Here we can see the default material for the Floor body. We’ll want to replace that material with our Quixel Mix. To do that, let’s start by downloading a similar material.

Note: In general, you’ll find it easier to add Quixel Mixer materials when you adapt an existing Fusion 360 material with similar attributes. In this case, we can use the existing Asphalt material.

After the download finishes, click and drag the Asphalt material into your design.

We can then replace the default material with the Asphalt.

5. Replace Fusion 360 Material with Quixel Mix

Next, we can begin modifying the Fusion 360 Asphalt material with the Quixel Mix.

As mentioned in Part 1, the materials in Fusion 360 are made up of individual map image files:

Albedo/Diffusion/Color — the color a material reflects

Normal and/or Height Maps — the bumps and imperfections along a surface

Roughness — the smoothness of a surface (ranging from a sharp reflection to fuzzy/diffuse)

Reflectance/Specular/Metalness — the reflectiveness of a surface (ranging from mirror finish to a dull surface)

Anisotropy/Ambient Occlusion — the shadows along a surface

Refractive — how light bends through a surface

Emissive — how much light a surface emits (glow)

Translucency/Opacity — how transparent a surface is to light

If you’re using the included sample images, you’ll find some but not all of these maps. Depending on what materials you’re mixing, you’ll need different image maps. The sample image package includes:

Floor_Diffuse.png — Color (placed in Parameters)

Floor_Roughness.png — Roughness (placed in Parameters)

Floor_Specular.png — Reflectance (placed in Parameters)

Floor_Normal.png — Normal (placed in Relief Pattern (Bump))

Floor_AO.png — Anisotropy (placed in Advanced Highlight Controls)

By replacing and adding these map files to the Fusion 360 Asphalt material, you can transform it into the Quixel mix. To start this replacement process, open the Appearance window, double-click the Asphalt material, and then click “Advanced…”

Rename the material to “Quixel_Asphalt” to distinguish the material from the original Fusion 360 Asphalt.

Under Parameters, we can add three (3) image maps. First, we’ll apply the diffusion/color map to the Image input in Fusion 360. Click on the Image filename 1_mats_surface_asphalt_color.jpg and navigate to your replacement images.

Select your Albedo/Color/Diffuse map file. If you’re using the sample images, it’s the file named Floor_Diffuse.png. Click Open to replace the default image file.

Next, we’ll repeat the process with the Reflectance and Roughness maps. By default, these two material attributes are set as slider values; click the drop-down arrow and then select Image to replace the slider value with an image map.

Next, select the Metallic/Specular image map. If you’re using the sample images, select Floor_Specular.png and click Open.

Next, repeat the same steps for the Roughness value. Select Image and then select your Roughness map. If you’re using the sample images, select Floor_Roughness.png.

Now that we’ve completed the three Parameter maps, we can move on to the Relief Pattern (Bump) map. Once again, we’ll replace the default image file (1_mats_surface_asphalt_.jpg) associated with the material. Note: Fusion 360 supports both bump and normal maps. If you want to know more about these two approaches to texturing a 3D model, then click here.

Next, we need to change the Relief Pattern from a Height Map to a Normal Map. To do this, we need to Edit the image.

Next, scroll down to Advanced and change Data Type to Normal Map.

Next, we need to ensure that all of our maps use the same Sample Size, and that all of them have linked texture transforms. Check “Link texture transforms” under the Transforms section of the Texture Editor. Be sure to repeat both steps for every image map.

These steps are important because they ensure that all of the image map data are aligned consistently across the material in Fusion 360. After you’ve verified these settings, you can click “OK” to finalize the changes to this material.

Now that the material has been updated you can Close the Appearances window.

To check and validate our new material, we need to switch to the Render Workspace in Fusion 360. Click on the Workspace button, and change it from DESIGN to RENDER.

6. Test render scene

Next, let’s save the design to capture the new material settings in your Fusion 360 Timeline. Click File and Save.

Fusion 360 will prompt you to describe your save point. Let’s name this save “Quixel Material Added” and click OK.

Before we can test our new material, we need to edit the SCENE SETTINGS from the SETUP menu. Open the SCENE SETTINGS window, click and drag “Dry lake bed” to the Current Environment, and then click Close.

We also need to change the IN-CANVAS RENDER settings to FAST so that we can easily see the material’s performance during rendering. To do this, click on the IN-CANVAS RENDER SETTINGS icon and click on the Fast tab. Then, click OK to update the rendering method.

Next, we can preview the rendering and see how the various maps work together under different lighting conditions. To do this, start the In-Canvas Render, open Scene Settings, and click on the Position icon to bring up the Rotation and Scale sliders. By changing the rotation, you can see how the surface of your floor object casts shadows at different angles, corresponding to the surface material.

Make sure to save your project to retain your rendering settings. If you’ve made it this far, then congratulations! You now have all of the information necessary to import Quixel Mixer materials into Fusion 360. In Part 3, we’ll explore some techniques for applying these materials to complex geometries, and how to post-process your images for additional realism. In Part 4, we’ll take these realistic models and generate Augmented Reality experiences for iOS.

Stay tuned!

Week 14 update: The Late Edition

The final push is now upon us. This past week I’ve been working nearly around the clock with my team, pushing to bring about our future vision. One of the most labor-intensive, yet rewarding, parts of this project has been the production of a newscast from the future. We’ve made countless script revisions; scraped stock images, sound, and footage; and crafted motion graphics elements to bring this story to life. It’s been challenging, but I’m excited to see the final results.

What’s working: our approach to generating a video is deeply grounded in research. We’re incorporating concepts generated with participants — public educators who so generously gave us their time and perspectives on the present and future state of teaching in American schools. We’re also building our story to represent several systems-level shifts, including national legislation, teachers union contracts, and individual school reforms. We used several different futuring frameworks to develop these narratives, including the cone of possibility, backcasting, STEEP+V, multilevel perspective (MLP) mapping, affinity mapping, and worldview filters.

[Images: MLP and STEEP+V concept sorting, an MLP map of the past, and the futures cone]

This process has been anything but precise. The future is something we build, not something we predict through careful measurements of trends. Understanding this truth has been very reassuring. Now that we are approaching a conclusion, I feel as though I have been on a long drive through undeveloped territory. The daylight of exploratory research gave way to the twilight of generative research and in the pitch of night we evaluated concepts. With only one headlight, we squinted off into the distance, to read the signs. Sometimes the precipitation of a pandemic obscured everything, but we relished the intermittent moments of clarity.

Those latter moments were by far the most exciting. “Oh, oh, what if…” was a common preamble to productive yet heady conversations with peers over Zoom, as we scrambled together various visual representations in Miro and Figma.

This workflow has been essential to synthesizing content and a visual language for our video, which we’ve been iterating on through various stages of prototyping. I’m concerned about the overall fidelity and recognize that this will be important to suspension of disbelief for our intended audience — policymakers and various stakeholders connected to PPS must find this artifact compelling enough to act and bring these concepts into a shared reality.

On the technical side, video editing and motion graphics are computationally intensive tasks. I built a beefy workstation prior to starting at CMU, and this machine has been essential to so many tasks and assignments. Nevertheless, I’ve found that this work has strained my system’s capacity. I’ve purged files to make room for temporary caching and rendering outputs, reset my router in a desperate effort to speed up the transfer of data to Google Drive, and run my system in a barebones state to maximize the resources available to Adobe CC’s memory-hungry apps.

The stress I place upon the tools I use to design is complemented by the stress I’ve applied to myself. My sleep has been intermittent. I take short naps on the couch, and on more than one occasion this week I found myself working through the sounds of birds before the break of dawn. These late-night hours are quiet and free of distraction, but they tend to make the day that follows less than appealing. I’m staying awake through this last week of lectures, but find my mind trailing off into thoughts about the timeline and how I might optimize frame rates for nominal render times. I’m obsessed with getting this video done, but know that this pace is not sustainable.

Kinetic-friendly spoon project Mega Post

That’s a wrap! It’s certainly been an interesting semester, but now I am ready to put it behind me. Reflecting on the spoon project, I have some final thoughts and observations. First, I want to thank the fine folks at the CMU School of Design. The amazing and hardworking faculty and my graduate student cohort have given me nothing less than inspiration and encouragement throughout this entire process, despite the obvious challenges of working remotely.

Rendering of the sixth and final (?) spoon design. I pulled the kitchen design (Pierre Gilles) and bowl (Damogran Labs) from GrabCad.com. The spoon and coffee mug are mine.

This project was divided into two parts: the first part focused on exploring different ways of prototyping and making. This was described to me as an informal A/B test for methods. The second part involved the deliberate iteration of prototypes through user testing — a challenge in the context of a global pandemic and social distancing. To make the most meaningful design choices possible given limited resources, I decided to leverage the power of physical simulation to supplement the making of physical prototypes.

There are a variety of 3D software tools that offer some degree of physical simulation. For this project, I selected Maxon Cinema 4D R20 (Educational License) and Blender as my two ways of making. I chose these because I am already familiar with Cinema 4D and know how to manage a workflow in that context; because Blender is open source and free for anyone to use; and because both programs work under macOS and Windows environments (my rendering workstation is a Hackintosh with multiple operating systems, which grants the flexibility to overcome certain technical limitations). My initial experiments with Cinema 4D were… not great.

My very first (and failed) attempt to simulate fluids in Cinema 4D. Carnegie Mellon University School of Design Prototyping for Interaction Spring 2020

As you can see, there are “physics” happening here, but they are not anything close to the physics of the real world. This is not “real world” physics; this is Asshole Physics:

Zachary "Spokker Jones" Gutierrez and I came up with the term "Asshole Physics" when we were discussing the game and the physics models it employed. Basically there's a lot of crap you can knock over and kick around, including dead bodies, buckets, cans, and little sections of drywall which are standing around in the middle of rooms for no obvious reason. Zachary casually mentioned, "I have made it a point to knock over every fucking thing in that game. I am living out my fantasies of being a giant asshole," and I responded by stealing his "asshole" comment and claiming that I made it up. Thus "Asshole Physics" was born.

Without more sophisticated plugins to simulate fluid, Cinema 4D R20 is only capable of non-Newtonian semisolids “out of the box.” I can make stuff bump around and “squish.” I can have a 3D character micturating on the side of a building. I can create the appearance and illusion of something like a fluid, but with such restrictions, I could not realistically evaluate my spoon designs. I explored my options and found that Next Limit’s RealFlow plugin would meet my basic needs. Best of all, they offer a free 30-day trial! My initial excitement quickly waned after the plugin failed to install and activate on my system…

(This email chain is long and covers a week of back and forth with customer service. I am including the entire conversation as a way to recreate my experience. While this may not directly relate to the scope of this project, I still believe that there is value in documenting the unexpected problems that crop up when trying to do something new.)

[Screenshots: the full email exchange with Next Limit customer service]

It took a week to finally get everything sorted with the demo. During that time, I began to explore option B: Blender.

Blender is a free, powerful, open-source 3D creation tool. Best of all, it includes the mantaflow fluid simulation engine (since version 2.8). I have worked with Cinema 4D on other projects and have become fairly comfortable with its interface. Given my experience with Fusion 360, Inventor, and C4D, I knew that I would need to overcome a learning curve before I could use this software to meet my needs for this project. Fortunately, I was able to find a spectacular tutorial series for beginners.

If you want to read more about my experience with the tutorial, click here.

This tutorial was ideal because it involved exercises that helped me learn how to use the interface, and covered several different workflows. I was really impressed with Blender’s node-based material system and procedural textures. You can work strictly with parametric modeling, or you can discretely modify mesh geometry to create highly organic and imperfect forms. I’m excited to work with Blender on future projects. It’s a very exciting time to be working in 3D.

While working through these tutorials, I began sketching and working in Fusion 360 to craft my first spoon designs for part 2 of this project. You can read more about this experience here.

Takeaways from Part 1

I really appreciated the responsiveness from the team at Next Limit. Clearly there are problems with their product’s copy protection. This is an all-too-common problem in the world of software. Programmers gotta eat just like everybody else, and we certainly should make sure that the talented and hardworking folks behind the code are able to put food on their table at the end of the day. Piracy can deprive a small business of the revenue necessary to keep the lights on, so I am absolutely sympathetic to this reality and to the risks involved when you release your software for demo purposes. Getting people to pay for something that they can easily get for free is a challenging proposition. At the same time, you cannot realistically expect customers to pay for software if they cannot try it first.

Ultimately, this one week of back and forth with customer support was a critical loss: I never completed a side-by-side comparison of fluid simulations. While I did eventually succeed in installing and using RealFlow for fluid simulations (and was honestly impressed with how easy it was), I did not have enough time to set up a comparable simulation to evaluate spoon designs. My trial expired about a week ago, and I see this aspect of the project as a lost opportunity. If Next Limit applied licensing practices similar to Maxon’s (verification through a .edu email address), they could offer an educational package of their RealFlow plugin.

Blender really came through for me. The learning curve was aggressive, but not impossible. While I found mantaflow to be a respectable and entirely capable fluid simulator, it was not without its own share of issues. I spent a lot of time making granular tweaks to improve the fidelity of my simulations, while also using the observations from my simulations to inform design decisions for my spoons in part 2 of this project.

Part 2: Design Iterations Based on User Testing

While this project required user testing and design iterations based on feedback, I decided to limit the user evaluations to handle shape and the spoon’s overall dimensions. This was not an arbitrary decision or an excuse to focus on the physical simulation of fluid dynamics (with user testing as an aside). No, this decision was based on the nature of the course from which the project was assigned: Prototyping for Interaction Design. This semester I have been focusing on designing for interaction (arguably, all designers do, at some point in their process, focus on this aspect). When thinking about the tools we use to eat food as a system, it is important to consider the touchpoints involved. The handle of a spoon is a non-trivial component. It can take on many forms, and it naturally includes affordances. How someone holds a spoon, and how easy it is for them to use it, are central to the evaluation of the design.

The design iterations were highly generative in nature. Inspired by both user evaluations and physical simulations, I maintained a homeomorphic continuity: treating the initial shape as an elastic form to be molded and reshaped to maximize performance. Knowing how a concave shape might be optimized to perform under rapid movement — I wanted to create something that would be useful, and the physical simulation of fluids facilitated a means of evaluation — is only one aspect of a more complicated interaction, and this test alone could not fully address human needs. But when physical form is designed and directed to improve user interaction, and physical properties are given equal consideration, it is possible to create a truly useful tool. I realize that this is a very technical description, but it is easier to understand when properly visualized. I have rendered a compilation sequence to show how this spoon shape evolved to its final (?) form (I am still considering a physical prototyping stage for this project over the summer).

A sequence of fluid dynamics tests designed to evaluate fluid retention of concave forms. Carnegie Mellon University, School of Design, Prototyping for Interaction, Spring 2020.

Toward the latter half of this sequence, you will notice a change in colors (for both the liquids and the spoons). I decided to differentiate the final rendering sequences, as these were based on user evaluations. The colors chosen for these final sequences are based on the color tags used for the user test:

These printouts are derived from DXF vector images exported from Fusion 360. The designs shown are oldest (top) to newest (bottom). The fifth design (blue) is rendered with a blue body and green liquid.

I printed and mailed the paper prototype to a potential user suffering from ongoing hand tremors (my partner’s mother). I sent this without written instructions. Instead, I only provided different color tags to facilitate feedback. My user let me know that the red spoon handle was in the “Goldilocks” zone in terms of size and shape: not too big, not too small, not too curvy, not too straight. Using this feedback I constructed the sixth and final (?) form — see the first image of this post.

The user test included a direct side-by-side comparison with existing dinnerware.

Before developing these simplified paper prototypes, I also experimented with ways of making more three-dimensional forms that could be sent in the mail. While this novel approach showed some potential, I was concerned with how user error might complicate or (even worse) bias feedback. Still, these paper prototypes helped me to better understand and interpret the scale of my 3D models.

[Image: paper prototype]

Final Thoughts

This project still feels somewhat incomplete. Perhaps this is because the generative design process itself can always demand further iteration, or maybe it is because I have not yet created a physical prototype that can actually be tested as an eating instrument. Maybe it is only because there were still a few “rogue droplets” (grrrrrr) that I simply could not keep contained with the completion of my sixth iteration. Whatever the net effect might be from these various shortcomings, I am pleased with the learning opportunities that were presented throughout this exploration of design.

Were I to continue with this process, the next steps would be to 3D print the latest shape using a food-safe material (there are a few third-party vendors that offer this service). I would then ship that latest design for further user evaluation. I believe that there are still many additional iterations necessary before I could defend having created something that satisfies the criteria I set out with this project (i.e., a spoon that overcomes the challenges of involuntary muscle movements and essential tremors).

If I were to collaborate with others, I would also want to evaluate the ecological and economic impact of such a device. How might we go about manufacturing at an appropriate scale? How might additional user tests with a wider audience influence the existing form? Many unanswered questions remain, along with a newfound respect for the power of generative design.

Bugs in the Blender

I have continued to have luck exploring fluid simulations in Blender, but the process has not been without its quirks. I recently encountered a strange issue related to the Particle Radius setting. From the Blender manual:

Particle Radius

The radius of one liquid particle in grid cell units. This value describes how much area is covered by a particle and thus determines how much area around it can be considered as liquid. A greater radius will let particles cover more area. This will result in more grid cells being tagged as liquid instead of just being empty.

Whenever the simulation appears to leak or gain volume in an undesired, non-physically-accurate way, it is a good idea to adjust this value. That is, when liquid seems to disappear, this value needs to be increased. The inverse applies when too much liquid is being produced.

What does this look like in practice? My most recent simulation actually seems to produce fluid as the scene progresses.
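
If you would rather not hunt through the Physics panel each time, the same value can be nudged from Blender’s Python console. A minimal sketch, assuming a liquid domain object named “Domain” (the names here are placeholders, not from my actual scene file):

import bpy

# Grab the mantaflow Fluid modifier on the liquid domain object
settings = bpy.data.objects['Domain'].modifiers['Fluid'].domain_settings

# My simulation is gaining volume, so per the manual the radius
# should come down a touch; the default value is 1.0
print(settings.particle_radius)
settings.particle_radius = 0.9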

Nevertheless, I was able to gain critical insights into this form and will continue to iterate new designs. This is being done in conjunction with paper prototyping. These forms are less sophisticated, but still provide valuable information about how users will experience and interact with this flatware.

[Image: spoon prototype]

Spoonfuls of updates

This week was packed full of progress on multiple projects. I received feedback for my group’s birth control information app “MyGallery.” Our work was even featured on CMU’s Design page.

Crafting an iconographic representation for the withdrawal method was my proudest moment.

I’ve continued to explore fluid simulations with Blender. I’ve run into some technical hurdles: Blender 2.82 uses a variety of frameworks to leverage GPUs for rendering and computation. It offers an AI-driven denoiser (OptiX), CUDA path tracing, and OpenCL. My MacBook Pro has an AMD Radeon Pro 5500M GPU, as well as the option to plug in a Radeon Frontier Edition (first-generation Vega) eGPU over Thunderbolt 3. There is plenty of GPU compute power in either configuration, but there is a snag: macOS 10.15 (Catalina) has deprecated OpenCL in favor of Metal 2+, CUDA and OptiX are proprietary to NVIDIA GPUs, and Apple hasn’t shipped a Mac with an NVIDIA GPU since Kepler (the GeForce 700 series). Blender supports AMD ProRender, but I found it terribly unstable.

I could easily slip into a tangent about how unfortunate the breakup between Apple and NVIDIA truly is, but I will spare you.

My current workflow involves queuing some tasks to my desktop, running Windows 10. The GPUs are dual Radeon VIIs. Unfortunately, I found that Blender is unstable when both GPUs render in parallel. No problem: I can render on one and free up the other GPU for Folding@Home (a hobby of mine that has exploded in response to COVID-19). Who would have guessed that a global pandemic would boost a distributed computing project to exascale?

Despite these obstacles of platform compatibility, I have made significant progress on my simulation-based research. It is difficult to overstate how exciting this project has been for me. For some context: the ASCI Red supercomputer (at Sandia National Laboratories) was built in 1996 and was the fastest supercomputer in the world until 2000. It was the first computer to achieve true terascale computing (one trillion floating-point operations per second). I built my first terascale computer in 2013, shortly after leaving my job at Intel. There was something very gratifying about building a computer with a CPU I helped manufacture. GLaDOS G4 (you can see the project here; scroll down to “Everything Else”) was built with a GeForce GTX 780 GPU and an Intel Core i7 4770K overclocked to 4.5 GHz. It ran nearly silent and fit inside an upcycled Apple Power Mac G4 (microATX-equivalent) case.

The ASCI Red supercomputer was designed to simulate nuclear weapons tests. Today, I am using a system roughly ten times more powerful to simulate soup spilling out of a spoon. I was inspired to approach this problem by two projects. The first was a 2013 project from Portland State University (my alma mater) to make a coffee cup for zero-gravity environments. They used drop cages and 3D printing to iterate several designs until they had a shape that held liquid. “It wasn’t needed, but it was requested.”

The other project hit me right in the heart.

The S’up Spoon is the embodiment of good design. The design was inspired by deep empathy for a user’s problem, and the solution involves as little design as possible. There are few technologies in this world that we trust enough to put in our mouths. If you can make it in this space, you can make it (almost) anywhere. During the fall semester, Moira and I visited the Carnegie Museum of Art. They had an exhibition on accessibility design, and I was brought to tears by stories of innovation and vibrant improvements to quality of life for people with disabilities. Technology, at its very best, empowers people to realize their fullest potential. We can easily get lost in the exhilaration of the complex, but this impulse must not dampen our ability to appreciate the elegance of simplicity. Some problems are best solved by form. I saw many incredible solutions in that exhibition, but this spoon has really stuck with me.

My goal is not to make something better, but perhaps a little bit different. The shape of the S’up Spoon is intuitive, and if we had never seen a spoon before, we might conclude that it is the better design over more traditional forms. It is, however, under our current cultural context, a strange thing to behold. It looks more like a wizard’s pipe or a warrior’s horn. It is beautiful and ergonomic. I do not intend to improve on those qualities. Instead, my goal is to make a spoon that is inconspicuous while still achieving similar results for users who suffer from motor movement difficulties.

How has my first design fared under simulation?

While I can certainly see the appeal of a long, hollow channel, I’ve become increasingly concerned with how this shape may be difficult to keep clean. I can imagine objects getting wedged toward the back, depending on what is being consumed. I have begun to work on a second iteration with a shallower channel. Still, this first iteration does fairly well: it manages to retain most of the 15 ml (i.e., 1 tablespoon) of fluid under rapid movement.

I enjoyed this simulation so much that I decided to make a rendering:

I have not yet gotten back into Cinema 4D to evaluate RealFlow. Despite the compatibility challenges, I am truly impressed with how powerful Blender has become with this latest release.

Now that I have established this workflow, I can easily swap in revised designs to test under identical conditions. I’m still not sold on the current handle shape, and I think I can improve liquid retention by tweaking the angle of the lips. The flat-bottomed (Chinese-style) spoon does fairly well, with its obtuse-angled walls. Next, I will try a concave structure with a wider base for the handle and a more aggressive descending angle.

Prototyping – Part 2

Working with Blender has continued to go well.

I have also been looking at some of the existing solutions in this space:

KFS Easy Eat

http://www.eating-help.com

Liftware, by Verily

https://www.liftware.com

EliSpoon

https://elispoon.com

Ornamin - Supportive Cutlery (Parkinson’s)

https://www.ornamin.co.uk/shop/cutlery-set?number=SW24

S’up Spoon

https://www.youtube.com/watch?v=C8nNlWw6KbA

Apex Medicine Spoon

https://www.riteaid.com/shop/apex-medicine-spoon-0233706

I have been sketching and studying these forms in consideration for my own designs.

[Images: spoon form studies sketched from the products above]

Some shapes are unappealing because of their associations. These still deserve consideration, as they function well in this space.


This week I will begin iterating designs in Fusion 360. Hopefully, I will finally be able to make use of my RealFlow trial license. I’m curious to see how the “out of the box” settings function with these geometries.

Fluid Simulation in Blender

“Throw off your fears let your heart beat freely at the sign that a new time is born.” — Minnie Riperton

I’ve completed my workflow design for fluid dynamics testing in Blender. Here’s a proof of concept:
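
For the curious, the moving parts of this workflow are simple: a domain box, a source of liquid, and the spoon acting as a collision object. Here is a rough sketch of that setup as a Blender 2.82 Python script; the object names are placeholders, and this is only a map of the pieces involved, not my actual scene file:

import bpy

def add_fluid(obj, role):
    # Attach a mantaflow Fluid modifier and set its role in the simulation
    mod = obj.modifiers.new(name='Fluid', type='FLUID')
    mod.fluid_type = role
    return mod

# Domain: a cube scaled to enclose the spoon and the liquid
domain = add_fluid(bpy.data.objects['DomainCube'], 'DOMAIN')
domain.domain_settings.domain_type = 'LIQUID'
domain.domain_settings.resolution_max = 128  # finer droplets, slower bakes

# Liquid: a small volume of initial geometry (roughly one spoonful)
flow = add_fluid(bpy.data.objects['LiquidSource'], 'FLOW')
flow.flow_settings.flow_type = 'LIQUID'
flow.flow_settings.flow_behavior = 'GEOMETRY'

# Spoon: imported from Fusion 360, collides with the liquid
spoon = add_fluid(bpy.data.objects['Spoon'], 'EFFECTOR')
spoon.effector_settings.effector_type = 'COLLISION'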

Now I just need to figure out RealFlow in C4D…

Interactive Design Prototyping

THE TIME HAS COME TO…PUSH THE BUTTON

Wireless communication between Arduino #1 and #2

My current project in IxD Prototyping involves physical computing (i.e., “interactive systems that can sense and respond to the world around them”). I have worked with Arduino before (Restricted Area, 2017), but this newest project is expected to have a daily use. In my head, I keep a long list of annoying technology interactions—it gets updated frequently. We are saturated with unsatisfying technology and devices that cause more problems than they solve. We have inconveniences stacked upon inconveniences, and anyone who stepped outside of this environment would inevitably conclude that most electronics are made to punish the buyers. I am looking to improve just one such interaction.

Back in 2012 I bought an HD video projector. If you love to watch movies, there is something magical about having “the big screen” at home. I love it. Do you know what I don’t love? Using an infrared remote control on a device that is mounted above and behind me. Seriously, Epson: what were you guys (and yes, I’m assuming it was a team of men, with their dumb penises getting in the way of common sense) thinking?! The primary function of the remote control is to simply turn the projector on and off. I would gladly give up the remote control entirely if I could simply move the power button to the armrest of my couch. Instead, I must contort my arm in Kama Sutra fashion just to find the right angle to get the sensor to recognize the POWER-ON command from the remote.

Getty Images: the various methods for turning on an Epson HD Projector.

My girlfriend’s method to bypass the projector remote is more elegant: she retrieves a step stool from our utility closet and presses the ON/OFF button on the projector chassis. This works well, but… well, let’s just say it ruins the mood. I began to explore other options and realized that the primary issue is that IR remotes are directional. The IR sensor is part of the projector assembly and cannot be relocated. An Arduino is capable of IR communication; it is also capable of RF communication. Radio frequency is far less dependent on line of sight, especially within the context of indoor, residential use. Imagine what WiFi would be like if it worked over infrared. Consider also that Apple abandoned its IR remote interface for the Mac.

Enter the Arduino

I found a few open source projects that utilize IR and RF communication:

https://learn.sparkfun.com/tutorials/ir-communication/all

https://www.electroschematics.com/ir-decoder-encoder-part-2-diy-38-khz-irtr-module/

https://create.arduino.cc/projecthub/electropeak/use-an-ir-remote-transmitter-and-receiver-with-arduino-1e6bc8

https://learn.adafruit.com/using-an-infrared-library/hardware-needed

https://www.sparkfun.com/datasheets/Components/nRF24L01_prelim_prod_spec_1_2.pdf (PDF Warning)

https://www.deviceplus.com/arduino/nrf24l01-rf-module-tutorial/

https://forum.arduino.cc/index.php?topic=421081.0

https://howtomechatronics.com/tutorials/arduino/arduino-wireless-communication-nrf24l01-tutorial/

All of these resources are excellent. I want to call attention to one more link: https://create.arduino.cc/projecthub/muhammad-aqib/nrf24l01-interfacing-with-arduino-wireless-communication-0c13d4

I have a bone to pick with this one. Take a look at the wiring diagram:

Diagram created by /u/Muhammadaqibdutt

Note the LED pin-out for the receiver. This diagram shows the positive leg of the LED connecting to pin 3.

Now, let’s take a look at the code:

[Screenshot: the receiver code from the tutorial]

The devil is in the details: the “digitalWrite(6, HIGH)” call is what turns the LED on. The code drives pin 6, while the diagram wires the LED to pin 3, so pin 3 does nothing.

This made for some very “fun” troubleshooting. I’ve since ironed out all the kinks, successfully pirated the IR remote signal from an Epson-brand projector (on loan from the Design Office at CMU), and moved on to making an enclosure. Will I 3D print or laser cut? I have not yet decided.

Here is some sample code for my RF-triggered IR emitter:

(NOTE: this code is just one half of the project, and by itself cannot do anything. You’ll also need IR and RF libraries to make this code work on your Arduino)

#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>
#include <IRLibAll.h>

RF24 radio(9, 10);               // CE, CSN pins for the nRF24L01 module
const byte address[6] = "00001"; // pipe address; must match the transmitter
boolean button_state = 0;
const int led_pin = 6;           // status LED (IRLib claims pin 3 on an Uno for the IR emitter)
IRsend mySender;                 // IR transmitter from IRLib2

void setup() {
  pinMode(led_pin, OUTPUT);
  Serial.begin(9600);
  radio.begin();
  radio.openReadingPipe(0, address);
  radio.setPALevel(RF24_PA_MIN); // minimum power is plenty at indoor range
  radio.startListening();        // act as the receiver
}

void loop()
{
  if (radio.available())
  {
    // The transmitter sends two payloads: a text message, then the button state
    char text[32] = "";
    radio.read(&text, sizeof(text));
    radio.read(&button_state, sizeof(button_state));
    if (button_state == HIGH)
    {
      digitalWrite(led_pin, HIGH);
      Serial.println(text);
      // NEC power code captured from the projector's remote
      mySender.send(NEC, 0xffa25d);
    }
    else
    {
      digitalWrite(led_pin, LOW);
      Serial.println(text);
    }
  }
  delay(5);
}

Evaluating Tools for Information Architecture

OmniGraffle for Mac

From the website:

OmniGraffle is a comprehensive, yet easy to use diagramming and drawing application. Drag and drop to create wireframes, flow charts, network diagrams, UI mockups, family trees, office layouts, and more. OmniGraffle 7 comes with plenty of features to get started in Standard. OmniGraffle Pro has everything in Standard, plus features suited specifically for folks that make a living designing or working with graphics—things like Shared Layers, Artboard Layers, Non-Destructive Shape Combinations, Blending Modes and Fill Effects, Visio support, SVG export, and more.

Weaknesses:

  • Price - even their educational license for students costs $89.99. They do offer a free trial, but it only works for 14 days

  • Compatibility - not easy to transfer projects to other platforms (e.g., Visio)

  • Learning curve - many reviews complain that it is difficult to learn how to use

xSort for Mac

From the website:

  • Visual environment simulating a table with cards (and outline view).

  • Supports open, semi-open and closed exercises.

  • Supports sub-groups (participants can put groups inside groups).

  • Control every aspect of the exercise (sorting type, card placement, etc.).

  • Statistical results (cluster tree, distance table, etc.) updated in real time.

  • Displays individually all the info related to an individual session.

  • Easily select the sessions you want to use based on different criteria.

  • Create, read, print and export reports with a single click.

  • Lock the document so that a participant may do only one session.

  • Fully integrated with Mac (Intel and PowerPC-based Macs).

  • Price - Free

Weaknesses:

  • 32-bit only (does not work with the latest version of macOS)

  • No support

  • Has not been updated in years

PowerMapper Desktop

From the website:

  • Platforms - Macintosh and Windows

  • Webcrawl - Automatically maps websites

  • Agnostic - Works in-browser and on the cloud

  • Light system requirements - works well on older computers

Weaknesses:

  • Price - $150 per license, and no educational license is offered; updates require an annual subscription of $37.25

  • Limited use - primarily designed for website analytics

Evaluating Tools for Interaction Design

From paper to digital

UXTools.co has some very useful information about design tools - and they break these down into specific tasks, such as:

Just one of many intuitive rankings for useful design tool categories

Which tool is best for information architecture? I cannot say for sure. There are many, many, many tools for designers to choose from. Knowing which tool is best for a particular task can save time and money. Let’s look at three:

This vector drawing app is part of an entire suite of tools offered by Adobe

Adobe Illustrator 2020

Strengths:

  • Compatibility - part of an “ecosystem,” it works seamlessly with other Adobe apps

  • Established standards - works with a variety of file types, and produces files that can be used with a variety of other apps

  • Maturity - with more than three decades of development, it is not likely to go away anytime soon

  • Updates - the software is frequently updated (with both new features and bug fixes)

Weaknesses:

  • Price - Adobe products have always been expensive, and every version of Illustrator since Adobe CS6 has been priced as a subscription, billed annually or monthly

  • Interactivity - does not support interactive features. Elements are static

  • Collaboration - does not support simultaneous editing

I do not have personal experience with this app (yet) but here’s what stackshare.io has to say:

Figma

Strengths:

  • Collaboration - while both Figma and Illustrator offer vector-based graphic design tools, only Figma is capable of collaboration in real-time. Multiple users can tweak and edit the same file simultaneously.

  • Endless design file versioning - file versioning is considered a “best practice” when working on a project. With Illustrator, this is done manually (users must be “good citizens” and use the “save as” option, adding _Vxx to the end of their file names). Figma does this automatically and embeds the changes in metadata

  • Platform agnostic - Figma runs in the browser. You can switch between machines and continue working on a variety of platforms. Illustrator works on a variety of platforms (Windows, macOS, and iOS), but each system requires a separate installation

  • Responsive UI - simple changes to graphics elements update in real-time

  • Prototyping - Illustrator can produce graphics, but it cannot produce interactive prototypes.

  • Handoff - prototypes can easily be handed off to web developers to be converted into fully-functional assets.

  • Price - it is free for students

Weaknesses:

  • Standardization - Illustrator is generally regarded as an industry standard, and it supports “legacy” project files. Figma is much more modern, but not as backward compatible.

  • No access to API - Illustrator users can program functions directly, which is especially useful when a project requires several repetitive tasks; Figma offers no equivalent

  • Popularity - “According to the StackShare community, Adobe Illustrator has a broader approval, being mentioned in 80 company stacks & 57 developer stacks; compared to Figma, which is listed in 60 company stacks and 54 developer stacks.” - stackshare.io

Adobe’s offering for designers who need to prototype for interaction

Adobe XD 2020

Strengths:

  • Compatibility - part of an “ecosystem,” it works seamlessly with other Adobe apps.

  • Prototyping - intuitive interface allows designers to rapidly “wire” their screens through a variety of triggers.

  • Large library - offers a wide variety of animations, transitions, and triggers.

  • Platform specific templates - includes built-in templates for quickly establishing a project format. Users can work from a variety of pre-baked device settings (iPhones, Android, Web, Desktop).

  • Updates - the software is frequently updated (with both new features and bug fixes).

  • Web-based sharing - prototypes can be shared and launched in browser. Works with Adobe Cloud

Weaknesses:

  • Price - Adobe products have always been expensive, priced as a subscription, billed annually or monthly

  • Limited multimedia abilities - while the graphics components are fairly robust, the sound features are extremely limited

  • Collaboration - does not support simultaneous editing

Which tool is right for evaluating information architecture?

I do not know. I have decided that I will work with Figma, because I believe its feature list is compelling and complete enough for my first IxD prototype project this semester. Additionally, Figma has gained significant industry presence. Knowing how to use this software could be beneficial in a variety of future careers.