
NASA / OHIO SPACE GRANT

CONSORTIUM

2005-2006 ANNUAL STUDENT RESEARCH SYMPOSIUM

PROCEEDINGS XIV

A Vision of Yesterday, Today, and Tomorrow

April 21, 2006
Held at the Ohio Aerospace Institute
Cleveland, Ohio


TABLE OF CONTENTS Page(s)

Table of Contents.......................................................................................................................................ii-vi

Foreword ...................................................................................................................................................... vii

Member Institutions...................................................................................................................................viii

Acknowledgments......................................................................................................................................... ix

Agenda........................................................................................................................................................x-xi

Group Photograph (Scholars and Fellows) ............................................................................................... xii

Symposium Photographs ................................................................................................................... xiii-xxii

SCHOLAR AND FELLOW RESEARCH SUMMARY REPORTS

Student Name College/University Page(s)

Abdallah, Islam M. .......... Central State University .......... 1-2
    Tensile Testing of Plastics
Acres, Justin D. .......... University of Dayton .......... 3-5
    A Study of Intelligent Agents in Interface Design
Anzalone, Amanda J. .......... Cedarville University .......... 6-8
    How Big Is Our Solar System? An Exploration of Proportion and Scale for Algebra I
Arter, Joshua M. .......... Wright State University .......... 9-10
    Computational Study of Cathode Location in the Discharge Chamber of an Ion Engine
Barbour, Charles W., II .......... University of Cincinnati .......... 11-14
    Composite Shape Memory Polymer Construction
Barsi, Stephen .......... Case Western Reserve University .......... 15-20
    Zero Boil-Off Pressure Control of Cryogenic Storage Tanks
Baughn, Leah I. .......... The University of Akron .......... 21-22
    Dimensional Analysis of Tire Loading
Bojanowski, Michael E. .......... Miami University .......... 23-26
    Stainless Steel Braze Mandrel Failure
Briggs, Maxwell H. .......... Case Western Reserve University .......... 27-29
    High Pressure Foil-Journal Bearing Characterization
Castellucci, Matthew A. .......... Ohio Northern University .......... 30-32
    CFD Analysis of the S809 Wind Turbine Airfoil
Ceesay, Sheriff Y. .......... Wilberforce University .......... 33-34
    Microprocessors for Robotics Application
Clark, Lauren A. .......... The University of Akron .......... 35-37
    Space – Exploring the Planets


Coatney, Denia R. .......... The Ohio State University .......... 38-41
    Moving Towards Lean in the Emergency Department of the University Hospital East
Cooper, Ryan J. .......... The University of Akron .......... 42-44
    Ceramic-Polymer Composite Bone Substitute Testing
Corbett, Michael W. .......... Wright State University .......... 45-48
    High Altitude Balloon Flight Path Prediction
Crim, Amanda G. .......... Cleveland State University .......... 49-50
    Ecology on Mars: Applying Knowledge of Ecological Relationships to Design a Biosphere on Mars
Crunkleton, Justin R. .......... Youngstown State University .......... 51-55
    Experimental Investigation into the Effects of Velocity and Pressure on Coulomb Friction
Davis, Elizabeth M. .......... The University of Akron .......... 56-59
    Roller-ragious
Davis, James G. .......... Cleveland State University .......... 60-61
    Problem Based Learning in Mathematics
Deshpande, Arati V. .......... The Ohio State University .......... 62-63
    Mechanical Breakaway System for Safety Verification on the Subject Load Device for the Enhanced Zero Gravity Locomotion Simulator
Deyoe, Jeremy P. .......... Cleveland State University .......... 64-65
    Creating a Self-Sustaining Ecosystem
Dodson, Christopher A. .......... Ohio University .......... 66-72
    Kinematics and Dynamics Analysis of NASA’s Robonaut
Dolence, Eric B. .......... Cleveland State University .......... 73-74
    Protection and Conversion Coatings
Edmonds, Shavon J. P. .......... Wilberforce University .......... 75-80
    Internet Measurements of Packet Reordering
Ellis, Brandon J. .......... Wright State University .......... 81-83
    Six Sigma and a Design of Experiments
Flegel, Ashlie B. .......... The University of Toledo .......... 84-85
    Computational Study of Engine Performance Using Computer Aided Simulation
Galbraith, Marshall C. .......... University of Cincinnati .......... 86-92
    Numerical Simulation of a Low Pressure Turbine Blade Employing Active Flow Control
Gatica, Maria J. .......... Carnegie Mellon University .......... 93-94
    High-Pressure Liquid Chromatography Analysis of a Vapor/Mist Phase Lubricant
Hehl, Eric L. .......... Owens Community College .......... 95-96
    Improving Nutrient Absorption in Zero Gravity


Henness, Stacey A. .......... Cedarville University .......... 97-99
    Calcium Stores in Tetrahymena Thermophila
Hlasko, Heather A. .......... Case Western Reserve University .......... 100-106
    Cone Penetrometer Equipped with Piezoelectric Sensors for Characterization of Lunar and Martian Soils
Hurtuk, Therese M. .......... The University of Akron .......... 107-108
    Analysis of Composite Materials in Spacecraft Using Green’s Function
Huseman, Douglas K. .......... University of Cincinnati .......... 109-110
    Pressure Attenuation in Pulse Detonation Combustors
Jefferson, Maurice .......... Central State University .......... 111-112
    Scientific Ballooning Applied to Atmospheric Temperature Analysis and Aerial Photography
Johnson, Jennifer M. .......... Cedarville University .......... 113-115
    Newton’s 3 Laws of Motion
Jones, Shannon A. .......... Central State University .......... 116-117
    Design of a Controlled Environment Simulating the Extreme Temperatures of the Tropopause: A Test Bed for Thermal Analyses of BalloonSat Payloads
Kenner, Naomi E. .......... Cedarville University .......... 118-120
    Protein Interactions in Osteoclast Differentiation
Kish, Loretta B. .......... Lakeland Community College .......... 121-122
    Comparing and Contrasting ICD-9-CM to ICD-10-CM
Kocoloski, Matthew L. .......... University of Dayton .......... 123-126
    A Review of Energy Harvesting Potential
Koester, Brandon D. .......... Ohio Northern University .......... 127-130
    CFD Analysis of Flow Over a Model Rocket
Lawrence, Charlita C. .......... Central State University .......... 116-117
    Design of a Controlled Environment Simulating the Extreme Temperatures of the Tropopause: A Test Bed for Thermal Analyses of BalloonSat Payloads
Leary, Sarah A. .......... Cedarville University .......... 131-132
    Applications of Ellipses
Lemon, Zachary S. .......... Marietta College .......... 133-134
    Waterflooding Ohio’s Berea Sandstone Formation
Lim, Lily .......... The University of Akron .......... 135-142
    Reliable Invasive Blood Pressure Measurements Using Fourier Optimization Techniques
Llapa, José F. .......... Cleveland State University .......... 143-144
    Flexible-Joint Mechanism for Space Applications


Mackay, Allison S. .......... Ohio Northern University .......... 145-146
    Geometry and Rockets
Meade, Wilbert E. .......... Central State University .......... 147-148
    Microbial Degradation of Petroleum Hydrocarbons
Miller, Evin L. .......... Terra Community College .......... 149-150
    Electrodynamic Tethers for Space Propulsion
Mitchell, Douglas A. .......... The Ohio State University .......... 151-154
    Control of High Speed Cavity Flow Using Plasma Actuators
Moore, Derrick, Jr. .......... Case Western Reserve University .......... 155-156
    Assurance Technology Center (ATC)
Mulcahey, Heather N. .......... University of Cincinnati .......... 157-160
    Structural Analysis of HALE Aircraft Wing Design
Orra, Mike .......... The University of Toledo .......... 161-168
    A Neural Network Based State of Charge Predictor for Lithium Ion Battery Cells
Plano, Susan B. .......... Wright State University .......... 169-173
    Helmet-Mounted Display (HMD) Interface Design for Head-Up Display (HUD) Replacement
Raffio, Gregory S. .......... University of Dayton .......... 174-180
    Design of Net Zero Energy Campus Residence
Reisberger, Turner K. .......... Marietta College .......... 181-183
    Coalbed Methane Potential in Southeast Ohio
Rios, Jeffrey N. .......... Case Western Reserve University .......... 184-185
    Vapor Phase Catalytic Ammonia Converter (VPCAR) LabView Programming
Roepcke, Frederick C. .......... The University of Toledo .......... 186-187
    Chemical and Mechanical Stability of Membranes Modified by Ion Beam Irradiation
Rutkowski, Adam J. .......... Case Western Reserve University .......... 188-194
    Aerial Odor Tracking in Three Dimensions
Scavuzzo, Joseph J. .......... The University of Akron .......... 195-196
    Potato Projectile Motion
Schilling, Walter W., Jr. .......... The University of Toledo .......... 197-204
    Modeling the Reliability of Existing Software Using Static Analysis
Sell, Paul H. .......... The University of Toledo .......... 205-206
    The Determination of Dust Opacities Using Color Asymmetries in Inclined Galaxies
Sheldon, Bradley J. .......... The University of Akron .......... 207-210
    Engine and Generator Efficiency Analysis


Sibbitt, Bethany G. .......... Cedarville University .......... 211-212
    The Role of p38 in Bone Modeling
Siwo, Japheth Thomas .......... Wilberforce University .......... 213-214
    Nanotechnology: The Impact on Business and Society
Smearcheck, Mark A. .......... Ohio University .......... 215-217
    Obstacle Detection and Avoidance Methods Implemented via LiDAR for Synthetic Vision Navigation Systems
Snyder, Robert M. .......... The Ohio State University .......... 218-219
    Mixing Control in High Speed Jets Using Plasma Actuators
Stiles, Justin A. .......... Ohio Northern University .......... 220-223
    High Albedo Concrete Pavements for Sustainable Design
Strudthoff, Bud L. .......... University of Cincinnati .......... 224
    Teaching Resources Easier to Find
Thorndike, Elizabeth M. .......... Youngstown State University .......... 225-227
    Impact of Inquiry Teaching Strategies Upon Student Learning
Tran, Henry .......... Miami University .......... 228-229
    Dynamics of CVT-Based Hybrid Vehicles
Tutor, William B. .......... Youngstown State University .......... 229-230
    Experiment Validation of a Precision Gear Pump
Van Vliet, Emily M. .......... Cedarville University .......... 232-235
    Neutron Stability Derived from an Electrodynamic Model of Elementary Particles
Venable, Don T. .......... Ohio University .......... 236-238
    AirBorne Laser Scanner Feature Extraction
Vogel, Elisa M. .......... The University of Toledo .......... 239-240
    Carbon Nanofiber Composites for Reverse Osmosis
Vogt, Kimberly J. .......... University of Dayton .......... 241-244
    High School Anatomy and Physiology - Students Investigate Human Physiology in Space to Increase Their Understanding of the Human Cardiovascular System
Wehrum, Kathryn D. .......... Ohio Northern University .......... 245-246
    The 7xxx Series Aluminum Alloy for Aircraft Structures
Wirick, Brian J. .......... Wright State University .......... 247-250
    Passive Radar Coverage Analysis Using Matlab
Wright, J. Rose .......... Wright State University .......... 251-252
    Inquiry-based and Discovery Learning in Mathematics
Yoshikawa, Chad O. .......... University of Cincinnati .......... 253-258
    Load Balancing Network Streams


FOREWORD

The Ohio Space Grant Consortium (OSGC) is one of 52 Consortia nationally. The Consortia comprise the National Space Grant College and Fellowship Program, which is Congressionally mandated and administered by NASA Headquarters. The objective of the Space Grant is to serve as a national asset, contributing significantly to the areas of aeronautics, space science and technology research, education, and public service. One of the major components of the Space Grant program is to provide Space Grant Scholarships and Fellowships to U. S. citizens studying in aerospace-related disciplines at affiliate universities. Since 1989, more than $4.1 million in financial support has been awarded to approximately 443 undergraduate scholars and 147 graduate fellows working toward degrees in Science, Technology, Engineering, and Mathematics (STEM) disciplines at participating universities. As an enhancement to their studies, students must engage in aerospace-related projects.

On Friday, April 21, 2006, all OSGC Scholars and Fellows reported on these projects at the Fourteenth Annual Student Research Project Symposium held at the Ohio Aerospace Institute in Cleveland, Ohio. In eight different sessions, Fellows and Senior Scholars offered 15-minute oral presentations on their research projects, fielded questions from an audience of their peers and faculty, and received written critiques from a panel of evaluators. Junior, Community College, Education, and Bridge Scholars presented posters of their research and entertained questions from all attendees during the afternoon poster session. All students were awarded Certificates of Recognition during the closing awards ceremony.

Research reports of Space Grant Fellows, Senior, Junior, Community College, Education, and Bridge Scholars from the following schools are contained in this publication:

Affiliate Members
• The University of Akron
• Case Western Reserve University
• Central State University
• Cleveland State University
• University of Dayton
• The Ohio State University
• Ohio University
• University of Cincinnati
• The University of Toledo
• Wilberforce University
• Wright State University

Participating Universities
• Cedarville University
• Marietta College
• Miami University
• Ohio Northern University
• Youngstown State University

Community Colleges
• Lakeland Community College
• Owens Community College
• Terra Community College


MEMBER INSTITUTIONS

Affiliate Members (Campus Representative)
• Air Force Institute of Technology .......... Dr. Michael L. Heil
• Case Western Reserve University .......... Dr. James D. McGuffin-Cawley
• Central State University .......... Dr. Gerald T. Noel, Sr.
• Cleveland State University .......... Dr. Bahman Ghorashi
• Ohio University .......... Dr. Roger Radcliff
• The Ohio State University .......... Dr. Füsun Özgüner
• The University of Akron .......... Dr. Paul C. Lam
• University of Cincinnati .......... Dr. Gary L. Slater
• University of Dayton .......... Dr. Donald L. Moon
• The University of Toledo .......... Dr. Kenneth J. De Witt
• Wilberforce University .......... Dr. Edward Asikele
• Wright State University .......... Dr. Mitch Wolff

Participating Institutions (Campus Representative)
• Cedarville University .......... Professor Charles Allport
• Marietta College .......... Dr. Benjamin H. Thomas
• Miami University .......... Dr. Osama M. Ettouney
• Ohio Northern University .......... Dr. Jed E. Marquart
• Youngstown State University .......... Dr. Hazel Marie

Community Colleges (Campus Representative)
• Columbus State Community College .......... Dr. John Marr
• Cuyahoga Community College .......... Dr. Jacqueline A. Joseph-Silverstein
• Lakeland Community College .......... Dr. Frederick W. Law
• Lorain County Community College .......... Dr. George Pillainayagam
• Owens Community College .......... Dr. Paul V. Unger
• Terra Community College .......... Dr. James Bighouse

Government Liaisons (Representative)
• NASA Glenn Research Center .......... Dr. M. David Kankam, Mr. Robert F. LaSalvia, Ms. Dovie E. Lacey
• NASA Headquarters .......... Dr. Larry Cooper
• Air Force Research Laboratory .......... Ms. Kathleen Schweinfurth, Ms. Kathleen Levine

Host Institution (Representative)
• Ohio Aerospace Institute .......... Ms. Ann O. Heyward


ACKNOWLEDGMENTS

Dr. Kenneth J. DeWitt, Director, Ohio Space Grant Consortium (OSGC), Dr. Gerald T. Noel, Associate Director, OSGC, and Ms. Laura A. Stacko, Program Manager, OSGC, wish to extend a thank you to the following evaluators for their time, their expertise, their support, and most importantly, for the inspiration and encouragement they offered to the Ohio Space Grant Scholars and Fellows during the student presentations on April 21, 2006.

• Edward Asikele, Wilberforce University
• Kulbinder Banger, Ohio Aerospace Institute
• Martin Cala, Youngstown State University
• David C. Freeman, Marietta College
• Dong-Shik Kim, The University of Toledo
• Hazel Marie, Youngstown State University
• Roger Radcliff, Ohio University
• Rickey J. Shyne, NASA Glenn Research Center
• Benjamin H. Thomas, Marietta College

Funding for Ohio Space Grant Scholarships and Fellowships is provided by the National NASA Space Grant College and Fellowship Program, the Ohio Aerospace Institute, The TRW Foundation, and the participating Ohio colleges, universities, and community colleges.

Special thanks go out to the following individuals:

• William R. Seelbach and the Ohio Aerospace Institute for hosting the event, for welcoming the attendees to OAI, and for his leadership comments.

• Richard S. Christiansen, NASA Glenn Research Center, for his inspiring words.

• Ann O. Heyward, Ohio Aerospace Institute, for assisting in the presentation of certificates, and all that she does for the Ohio Space Grant Consortium.

• Joseph Kolecki, NASA Glenn Research Center, for his motivating luncheon speech.

• Ohio Aerospace Institute staff whose assistance made the event a huge success!

- Mark Cline, Dave Haring, Keisha James, Gary Leidy, Christopher Lloyd, Ila Pearl, Fred Reid, Joyce Robertson, and Richard Spratt


2006 OSGC Student Research Symposium Hosted By: Ohio Aerospace Institute (OAI)

22800 Cedar Point Road • Cleveland, OH 44142 • (440) 962-3000
Friday, April 21, 2006

AGENDA

8:00 AM – 8:30 AM  Sign-In / Continental Breakfast / Portraits .......... Lobby
8:30 AM  Welcome – Dr. Kenneth J. De Witt, Director, OSGC, and Dr. Gerald T. Noel, Sr., Associate Director, OSGC .......... Forum
8:30 AM – 8:40 AM  Leadership Comments, William R. Seelbach, OAI President & CEO .......... Forum
8:40 AM – 8:50 AM  Leadership Comments, Richard S. Christiansen, Deputy Director, NASA Glenn Research Center .......... Forum
8:50 AM – 9:00 AM  Break
9:00 AM – 10:15 AM  Oral Presentations - Senior Scholars and Fellows, Break-out Sessions 1-4 .......... See Schedule Below
10:15 AM – 10:30 AM  Break – Juice, coffee, and soda will be available .......... 2nd Floor Area
10:30 AM – 11:45 AM  Break-out Sessions 5-8 .......... See Schedule Below
11:45 AM – 12:30 PM  Lunch .......... Sunroom
12:30 PM – 1:15 PM  Joseph Kolecki, NASA Glenn Research Center, "Exploration of Mars?" .......... Sunroom
1:15 PM – 1:35 PM  Poster Presentations – Junior, Community College, Education, and Bridge Scholars .......... Lobby
1:35 PM  Group Photo .......... Lobby / Atrium
1:45 PM – 2:30 PM  Presentation of Certificates and Distribution of Presentation Evaluations .......... Sunroom
2:30 PM  Formal Symposium Adjourns
2:35 PM – 4:00 PM  Tour of NASA Glenn Research Center* .......... Meet Outside By Front Entry Door
*Vans will transport tour participants over to the Center and return to OAI.

There are eight groups of student presentations (Senior Scholars and Fellows): four groups run in parallel during Session 1 and four during Session 2. Evaluators will be present in each of the parallel sessions. At the end of each presentation, students will entertain questions from the evaluators and other attendees at the session.

Session 1 – 9:00 AM – 10:15 AM
• Group 1 .......... President’s Room
• Group 2 .......... Federal Room
• Group 3 .......... Forum
• Group 4 .......... Industry Room

Session 2 – 10:30 AM – 11:45 AM
• Group 5 .......... President’s Room
• Group 6 .......... Federal Room
• Group 7 .......... Forum
• Group 8 .......... Industry Room

Evaluation Criteria
Professional educators and researchers will be on hand to provide you with positive feedback on your oral presentations. They will be considering the following elements:
• Technical content and quality of research
• Logic of research method used
• Relevancy and practicality of the research
• Delivery of oral presentation


SESSION 1 – 9:00 AM to 10:15 AM

Group 1 – Electrical Engineering
PRESIDENT’S ROOM (LOWER LEVEL)
Evaluators: Roger Radcliff and Rickey J. Shyne
9:00 Emily Van Vliet, Senior, Cedarville
9:15 Don Venable, Senior, Ohio University
9:30 Brian Wirick, Senior, Wright State
9:45 Walter Schilling, Jr., PhD 2, Toledo

Group 2 – Mech. Eng./Ind. & Systems Eng.
FEDERAL ROOM (2ND FLOOR)
Evaluators: Martin Cala and Hazel Marie
9:00 Maxwell Briggs, Senior, Case Western
9:15 Justin Crunkleton, Senior, Youngstown State
9:30 Bradley Sheldon, Senior, Akron
9:45 Denia Coatney, MS 1, Ohio State
10:00 Gregory Raffio, MS 1, Dayton

Group 3 – Computer Engineering/Computer Science/Computer Information Systems
FORUM (LOBBY LEVEL - AUDITORIUM)
Evaluators: Edward Asikele and Kulbinder Banger
9:00 Justin Acres, Senior, Dayton
9:15 Tamela Jones, Senior, Wilberforce
9:30 J. Thomas Siwo, Senior, Wilberforce
9:45 Mark Smearcheck, Senior, Ohio University
10:00 Chad Yoshikawa, PhD 3, Cincinnati

Group 4 – Bio./Biomedical Eng./Chemical Eng./Petroleum Engineering
INDUSTRY ROOM (2ND FLOOR)
Evaluators: Dong-Shik Kim, Dave Freeman, and Benjamin Thomas
9:00 Naomi Kenner, Senior, Cedarville
9:15 Eric Dolence, Senior, Cleveland State
9:30 Turner Reisberger, Senior, Marietta College
9:45 Lily Lim, MS 2, Akron

SESSION 2 – 10:30 AM to 11:45 AM

Group 5 – Manufacturing Eng./Civil Engineering
PRESIDENT’S ROOM (LOWER LEVEL)
Evaluators: Edward Asikele, Dave Freeman, and Roger Radcliff
10:30 Maurice Jefferson, Senior, Central State
10:45 Theresa Hurtuk, Senior, Akron
11:00 Justin Stiles, Senior, Ohio Northern
11:15 Heather Hlasko, PhD 1, Case Western

Group 6 – Mechanical Engineering
FEDERAL ROOM (2ND FLOOR)
Evaluators: Rickey J. Shyne and Hazel Marie
10:30 Michael Corbett, Senior, Wright State
10:45 Christopher Dodson, Senior, Ohio University
11:00 Matthew Kocoloski, Senior, Dayton
11:15 Heather Mulcahey, Senior, Cincinnati
11:30 Adam Rutkowski, PhD 1, Case Western

Group 7 – Mechanical Engineering
FORUM (LOBBY LEVEL - AUDITORIUM)
Evaluators: Martin Cala and Kulbinder Banger
10:30 Brandon Koester, Senior, Ohio Northern
10:45 Douglas Mitchell, Senior, Ohio State
11:00 Bryan Pelley, Senior, Dayton
11:15 Stephen Barsi, PhD 2, Case

Group 8 – Aero. Eng./Electrical Eng./Eng.
INDUSTRY ROOM (2ND FLOOR)
Evaluators: Dong-Shik Kim and Benjamin Thomas
10:30 Charles Barbour, Senior, Cincinnati
10:45 Sheriff Ceesay, Senior, Wilberforce
11:00 Mike Orra, MS 2, Toledo
11:15 Susan Plano, PhD 3, Wright State
11:30 Marshall Galbraith, Senior, Cincinnati


2006 GROUP PHOTO

(Scholars, Fellows, Campus Representatives, and Advisors)

The OSGC credits Sharon Mitchell, Photographer, and Mark Cline, OAI, for taking pictures throughout the Symposium.


SYMPOSIUM PHOTOGRAPHS - 2006

Following are photographs of Graduate Fellows and Senior Scholars who presented individual 15-minute oral presentations on their research in two morning sessions.

William R. Seelbach, Ohio Aerospace Institute President and CEO, opens the OSGC Student Research Symposium activities by welcoming all of the attendees to OAI.

Richard S. Christiansen, Deputy Director, NASA Glenn Research Center, motivates this year’s Symposium attendees.

Gerald T. Noel, Sr., Associate Director, OSGC, welcomes everyone and describes the activities of the day’s events.

Mark Smearcheck, Senior, Ohio University, explains his project, “Obstacle Detection and Avoidance Methods Implemented via LiDAR for Synthetic Vision Navigation Systems.”


Denia Coatney, Fellow, The Ohio State University, presents her research entitled, “Moving Towards Lean in the Emergency Department of the University Hospital East.”

Maxwell Briggs, Senior, Case Western Reserve University, discusses his project, “High Pressure Foil-Journal Bearing Characterization.”

Naomi Kenner, Senior, Cedarville University, shares her research entitled, “Protein Interactions in Osteoclast Differentiation.”

Turner Reisberger, Senior, Marietta College, explains his research project entitled, “Coalbed Methane Potential in Southeast Ohio.”

Thomas Siwo, Senior, Wilberforce University, discusses his project, “Nanotechnology: The Impact on Business and Society.”


Luncheon attendees enjoy listening to Joseph C. Kolecki, Physicist, NASA Glenn Research Center, and his inspiring talk on the “Exploration of Mars?”

Justin Acres, Senior, University of Dayton, showcases his research entitled, “A Study of Intelligent Agents in Interface Design.”

Douglas Mitchell, Senior, The Ohio State University, explains his project entitled, “Control of High Speed Cavity Flow Using Plasma Actuators.”

Stephen Barsi, Fellow, Case Western Reserve University, shares his research project, “Zero Boil-Off Pressure Control of Cryogenic Storage Tanks.”

Justin Crunkleton, Senior, Youngstown State University, explains his project entitled, “Experimental Investigation into the Effects of Velocity and Pressure on Coulomb Friction.”

Emily Van Vliet, Senior, Cedarville University, presents her project, “Neutron Stability Derived from an Electrodynamic Model of Elementary Particles.”


Following are photographs of Junior, Community College, Education, and Bridge Scholars who presented their research projects during the afternoon Poster Session:

William Tutor, Youngstown Scholar

Shavon Edmonds, Wilberforce Scholar

Central State Scholar Wilbert Meade

Robert Snyder, Ohio State Scholar

Jeremy Deyoe (right), Cleveland State, discusses his research with Dr. Augustus Morris (left), Central State.

Fellow Scholars Jennifer Johnson (left) and Sarah Leary (right) from Cedarville University

James Davis, Cleveland State Scholar

Matt Castellucci, Ohio Northern Scholar

UC Scholar Charles Barbour (left) and Dr. Gary Slater (right), UC

Kim Vogt, U Dayton Scholar, explains her research to Dr. Marty Cala, YSU.


UA Scholar Libbi Davis (center) shows her research to fellow UA Scholars Lauren Clark and Joey Scavuzzo.

Jeremy Deyoe (center), CSU, discusses his research with Joe Kolecki (left), NASA Glenn, and Ken De Witt (right), OSGC Director.

Central State Scholars Shannon Jones (right) and Charlita Lawrence (left)

Evin Miller, Scholar from Terra Community College

Kimberly Vogt, U Dayton Scholar

Islam Abdallah, Central State Scholar

Matt Castellucci (center), ONU, shows his research to Dr. Hazel Marie (left), YSU, and Stephen Barsi (right), Case.

Naomi Kenner (right), Cedarville, talking with Loretta Kish (left), LCC, and Andrew Kish (center).

Rose Wright, Scholar from WSU

U Toledo Scholar Frederick Roepcke


Fellow Bridge Scholars from CWRU: Jeffrey Rios (left) and Derrick Moore (right)

Henry Tran (right), Miami Scholar, explains his research to Mike Corbett, WSU Scholar.

Representing Cedarville (left to right): Stacey Henness, Bethany Sibbitt, Chuck Allport, and Amanda Anzalone

Representing U Akron (left to right): Joey Scavuzzo, Paul Lam, Lauren Clark, and Libbi Marshall

Amanda Anzalone, Cedarville Scholar; future Scholar Kevin Gulley (far left)

Owens CC Scholar Eric Hehl with his Advisor, Tekla Madaras

UT Scholar Fred Roepcke explains his research to Dr. Dong-Shik Kim, UT.

Loretta Kish, Lakeland CC Scholar, and husband, Andrew Kish.

WSU Scholar Joshua Arter (left) shows his project to Adam Rutkowski, Fellow from Case Western.

Cedarville Scholars with their posters: Stacey Henness (left) and Bethany Sibbitt (right)


Following are photographs of students receiving their Certificates of Recognition at the afternoon Awards Ceremony:

Gerald T. Noel, Sr., Associate Director, OSGC, congratulates Lauren Clark, The University of Akron Education Scholar, as she receives her Certificate of Recognition while Ken De Witt (center), OSGC Director, and Joe Kolecki (right), NASA Glenn Research Center, look on.

Ann O. Heyward, Vice President of Workforce Enhancement at OAI, congratulates Mike Corbett, Scholar from Wright State University, as he receives his Certificate of Recognition.

Joe Kolecki congratulates Mike Orra, Fellow, The University of Toledo, while Laura Stacko (left) and Ken De Witt (center) share in his award.

Joe Kolecki congratulates Christopher Dodson, Scholar, Ohio University, while Gerald Noel, Sr., (left) and Ken De Witt (center) look on.

Ken De Witt, OSGC Director, congratulates Denia Coatney, Fellow, The Ohio State University, along with Laura Stacko (left), Gerald Noel, Sr., (center) and Joe Kolecki (right).

Amanda Crim, Education Scholar, Cleveland State University, receives her Certificate of Recognition as Gerald Noel, Sr., (left), Ken De Witt (center) and Joe Kolecki (right) also share in her award.


Ken De Witt (center) and Joe Kolecki (right) congratulate Charles Barbour, II (left) Scholar from the University of Cincinnati.

Loretta Kish, Scholar from Lakeland Community College, receives her Certificate of Recognition from Joe Kolecki while Laura Stacko, Gerald Noel, Sr., and Ken De Witt look on (featured from left to right).

Joe Kolecki congratulates Lily Lim, Fellow, The University of Akron, while Gerald Noel, Sr., (left) and Ken De Witt (center) also share in her celebration.

Zachary Lemon, Scholar from Marietta College, receives his Certificate of Recognition from Joe Kolecki while Gerald Noel, Sr., (left) and Ken De Witt (center) also share in his excitement.

Joe Kolecki congratulates Charlita Lawrence, Central State University Scholar, while Gerald Noel, Sr., (left) and Ken De Witt (center) look on.

Adam Rutkowski, Fellow from Case Western Reserve University, receives his Certificate of Recognition from Ken De Witt while Laura Stacko, Gerald Noel, Sr., and Joe Kolecki also share in his award (featured from left to right).


Joe Kolecki congratulates Derrick Moore, Bridge Scholar from Case Western Reserve University. Gerald Noel, Sr., and Ken De Witt are also pictured (from left to right).

Maxwell Briggs, Scholar from Case Western Reserve University, receives his Certificate of Recognition from Ken De Witt while Gerald Noel, Sr., (left) and Joe Kolecki (right) look on.

Joe Kolecki congratulates Joshua Arter, Scholar, Wright State University, while Gerald Noel, Sr., (left) and Ken De Witt (center) also share in his celebration.

Sheriff Ceesay, Scholar, Wilberforce University, receives his Certificate of Recognition from Joe Kolecki while Gerald Noel, Sr., (left) and Ken De Witt (center) also share in his excitement.

Joe Kolecki congratulates Ashlie Flegel, Scholar from The University of Toledo while Gerald Noel, Sr., (left) and Ken De Witt (center) look on.

Islam Abdallah, Scholar from Central State University, receives his Certificate of Recognition from Ken De Witt while Gerald Noel, Sr., (left) and Joe Kolecki (right) also share in his award.


Joe Kolecki congratulates Eric Dolence, Scholar from Cleveland State University. Gerald Noel, Sr., and Ken De Witt are also featured (from left to right).

Maria Gatica, Bridge Scholar from Carnegie Mellon University, receives her Certificate of Recognition from Joe Kolecki while Ken De Witt looks on.

Joe Kolecki congratulates Wilbert Meade, Scholar, Central State University, while Gerald Noel, Sr., (left) and Ken De Witt (center) also share in his celebration.

Matthew Kocoloski, Scholar from the University of Dayton, receives his Certificate of Recognition from Joe Kolecki while Gerald Noel, Sr., (left) and Ken De Witt (center) also share in his excitement.

Ken De Witt congratulates Libbi Marshall, Education Scholar from The University of Akron, while Gerald Noel, Sr., (left) and Joe Kolecki (right) look on.

Eric Hehl, Scholar from Owens Community College, is congratulated by Joe Kolecki while Gerald Noel, Sr., (left) and Ken De Witt (center) also share in his award.


Tensile Testing of Plastics

Student Researcher: Islam M. Abdallah

Advisor: Mr. John Sasson

Central State University Manufacturing Engineering Department

Abstract
Today’s concern with space travel and aircraft, as well as global travel, is fuel consumption. Gas prices have skyrocketed, making travel on the ground and in outer space seem much harder. A major factor in fuel consumption is the weight of the vehicle. The common materials used in spacecraft are metals and ceramics. The advantage of metals is that they are extremely tough, but the disadvantages are that their tolerance to heat is fairly low when compared with ceramics, and they are very dense, making them heavy. Ceramics, on the other hand, are lightweight and have an excellent tolerance to heat, yet are extremely brittle, with the slightest impact shattering the ceramic. This brings me to my project, the use of plastics in space travel. Plastics have been used in cars to replace steel parts that make the car heavy. Plastics have also been proven to be as tough as metals and, with additives, can be made to withstand moderate amounts of heat. My project will test polystyrene samples of various shapes to see which one has the highest tensile strength. The second part will involve molding the polystyrene parts with various additives and once again testing them to see which one has the highest tensile strength.

Project Objective
To manufacture tensile test specimens under various conditions of temperature, time, and pressure, and determine which specimens under the given settings withstand the highest forces. There will be two sample designs made of polystyrene, the “dog bone” and the “rod”. Four samples will be made of each type and design setting. The tensile test values will help determine which geometry is stronger for applications that need a product that withstands tensile forces.

Methodology Used
The three settings I will use to create the part are temperature, pressure, and time. My goal is to manipulate the variables to obtain the strongest and toughest piece. There will be two dog bone and two rod specimens made for each setting.

Temperature: 1 = High (450), 0 = Low (405)
Pressure: 1 = High (800), 0 = Low (600)
Time: 1 = High (3), 0 = Low (2)

The specimens will then be tested using the tensile test machine. The tensile tests of each group and type of specimen will be compared to find out which one would be better suited for applications with tensile forces.

Results Obtained
The results were the opposite of what I had initially expected. The dog bone samples in some cases had almost four times the tensile strength of their rod-shaped counterparts. The observations showed that the dog bone shapes stretched much longer than the rod-shaped specimens, with the rod almost snapping instantly at the first sign of fatigue. My observation as to why the dog bone specimens withstood more force is due to their geometry: since the geometry is curved, the dog bone can withstand more force along a curve.
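The molding plan described under Methodology Used amounts to a standard two-level, three-factor (2×2×2) full-factorial design. The following minimal sketch, which is illustrative and not part of the original study, simply enumerates the eight specimen groups with the setting values listed above (units as given in the report):

```python
# Illustrative sketch (not from the report) of the 2x2x2 molding test matrix
# described in the Methodology: temperature, pressure, and time each at a
# low (0) and high (1) setting, giving eight specimen groups.

from itertools import product

LEVELS = {
    "temperature": {0: 405, 1: 450},
    "pressure":    {0: 600, 1: 800},
    "time":        {0: 2,   1: 3},
}

SPECIMENS_PER_GROUP = {"dog bone": 2, "rod": 2}   # as stated in the Methodology

for code in product((0, 1), repeat=3):
    # Map each digit of the group code to its factor setting, in the order above.
    settings = {name: LEVELS[name][bit] for name, bit in zip(LEVELS, code)}
    for shape, count in SPECIMENS_PER_GROUP.items():
        print(f"group {code}  {shape:8s} x{count}  settings={settings}")
```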


Significance of Results
These results show that dog bone shaped specimens are much better suited to handle tensile forces in various applications than rod shaped specimens, which break easily. The dog bone shaped specimens can be made out of different and stronger plastics (such as ABS) to suit some of the more demanding space travel applications. Another benefit is that, since these parts are much lighter than steel, they can replace steel to make the space shuttle lighter and more fuel efficient.

Charts and Graphs

Average Tensile Strength (Dog Bone)
(Group codes use the 1 = High, 0 = Low settings defined in the Methodology.)

Group      Tensile Strength (psi)
0 0 0      2404
0 0 1      2010
0 1 0      2012
0 1 1      2018
1 0 0      2014
1 0 1      2008
1 1 0      1996
1 1 1      2020

Average Tensile Strength (Rod)

Group      Tensile Strength (psi)
0 0 0      602.5
0 0 1      605
0 1 0      602.5
0 1 1      747.5
1 0 0      902.5
1 0 1      880
1 1 0      877.5
1 1 1      915
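For readers who want to reproduce the comparisons discussed in this report, here is a minimal, illustrative re-analysis of the tabulated group averages. The mapping of the three code digits to Temperature, Pressure, and Time (in the order listed under Methodology Used) is an assumption, not something stated in the report.

```python
# Illustrative re-analysis of the tabulated group averages above (psi).
# Assumption (not stated in the report): the three digits of each group code
# are the Temperature, Pressure, and Time settings, in that order.

dog_bone = {(0,0,0): 2404, (0,0,1): 2010, (0,1,0): 2012, (0,1,1): 2018,
            (1,0,0): 2014, (1,0,1): 2008, (1,1,0): 1996, (1,1,1): 2020}
rod      = {(0,0,0): 602.5, (0,0,1): 605, (0,1,0): 602.5, (0,1,1): 747.5,
            (1,0,0): 902.5, (1,0,1): 880, (1,1,0): 877.5, (1,1,1): 915}

# Dog bone vs. rod: the ratio per group.
for code in sorted(dog_bone):
    print(code, f"dog bone / rod = {dog_bone[code] / rod[code]:.2f}")

# Main effect of each factor on dog-bone strength:
# mean at the high setting minus mean at the low setting.
factors = ("temperature", "pressure", "time")
for i, name in enumerate(factors):
    high = [v for c, v in dog_bone.items() if c[i] == 1]
    low  = [v for c, v in dog_bone.items() if c[i] == 0]
    print(f"{name:12s} main effect: {sum(high)/4 - sum(low)/4:+.1f} psi")
```

Under that reading, the ratio for the first group comes out near 4.0, which matches the “almost four times” figure quoted in the Abstract.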

Acknowledgments
I would like to begin by saying thank you to the Central State University students and faculty, who helped me greatly with this project. More specifically, I would like to thank Mr. Mel Shirk, our lab technician, for helping me injection mold the parts as well as for his insight on how to effectively run this experiment. I would also like to thank Mr. John Sasson, my advisor, for giving me ideas for an effective experiment as well as for finding resources on the mechanical qualities of polystyrene. Last but not least, I would like to thank Dr. Abayomi Ajayi-Majebi for the long hours he spent with me debugging and fixing the tensile test machine.

References
http://www.texasfoam.com/technical-data.htm, “Styrofoam - Expanded Polystyrene”, 3/31/2006.
http://www.gotogmg.com/Pages/injection_molding.htm, “Plastic Injection Molding”, 4/05/2006.
http://www.immnet.com/articles?article=1992, “By Design: Polystyrene Part Design”, 4/6/2006.


A Study of Intelligent Agents in Interface Design

Student Researcher: Justin D. Acres

Advisor: Dr. Waleed Smari

University of Dayton Department of Electrical and Computer Engineering

Abstract
Interface design is an important part of the software development process. Constructing interfaces that are easy to use and functional is a challenge, but one that industry continuously works to improve in personal computers (PCs) so that a greater percentage of users have a more positive experience. This challenge applies both to the traditional PC experience and to proprietary applications, such as radios and information management screens in military systems. Being able to convey information quickly in critical-response and time-sensitive software is a top priority as a success measure of the graphical user interface (GUI). Having this information too late, or in a format that is not easy for the operator to use, is a potentially large issue. The importance of a usable interface is highlighted most strongly in these applications, but it also applies to casual users of computing machinery. The diversity of computer users has increased the importance of having accessible interfaces that can adapt tasks for those with impaired hearing, sight, or other important faculties. Operating systems provide limited resources to allow these users to experience technology the way that other able-bodied and able-skilled people do. The importance of interface design is highlighted in this situation, as a usable interface is a critical component of the disabled having access to computing technology.

Human Computer Interaction (HCI) is the discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use [1]. The easy part of computing machinery design is the computer: it is well understood, and accurate measurements of performance are easy to collect and analyze. However, a strong but sometimes less studied part of computing is the human user. Humans are much more difficult to measure and analyze, as highlighted by conflicting research about the most accurate form of psychological assessment. Computing machinery can be designed to specifications, but ultimately all humans are individuals, with no two having the same experience of the world. That said, most user interfaces present the same information in the same manner and format to all users, regardless of these accepted differences between users.

Project Objectives
For this project I will analyze the correlation between personality and interface design, with future connections to the use of artificial intelligence agents correlated with personality typing. The project focuses on one point of the Myers-Briggs Type Indicator (MBTI) and basic GUIs built to isolate this measurement. The results of this project will show an initial view into the applicability of personality-type-dependent applications, where a user would enter a personality type to receive a more efficient and likeable user interface. The groundwork will apply to multiple areas of software engineering where certain applications can be designed for the target audience; i.e., if most users are of a certain personality type, the GUI designers should optimize for this type. The next step is to apply one or more agents to “learn” what a user’s application interface habits are in order to create a more customized environment. This idea is similar to the expansion of toolbar menus in Microsoft Office applications, where infrequently used options are hidden, thus making it a user-specific setup without requiring manual customization.
Such an agent (or agents) could apply data about users to everyday applications, creating satisfaction and efficiency at a level greater than the mass-market software applications available today. With greater customization, the number and difficulty of computing tasks undertaken by users can increase, creating still greater efficiency.
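As a concrete illustration of the kind of agent described above, here is a minimal sketch, not taken from the report, of a usage-tracking component that decides which menu items to keep visible; the class, item names, and threshold are hypothetical.

```python
# Illustrative sketch (not from the report) of the adaptive-menu idea described
# above: an agent records how often each menu item is used and keeps only the
# frequently used ones visible, similar to the expanding toolbars in
# Microsoft Office mentioned in the Project Objectives.

from collections import Counter

class MenuUsageAgent:
    """Tracks per-item usage and suggests which items to show by default."""

    def __init__(self, items, visible_count=3):
        self.usage = Counter({item: 0 for item in items})
        self.visible_count = visible_count

    def record_use(self, item):
        self.usage[item] += 1

    def visible_items(self):
        # Show the most frequently used items; the rest stay behind an "expand" control.
        ranked = self.usage.most_common()
        return [item for item, _ in ranked[: self.visible_count]]

agent = MenuUsageAgent(["Open", "Save", "Print", "Export", "Macros"])
for action in ["Open", "Save", "Open", "Print", "Open", "Save"]:
    agent.record_use(action)
print(agent.visible_items())   # e.g. ['Open', 'Save', 'Print']
```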


Methodology Used
This report outlines a study based upon the S-N duality, the second component of the MBTI personality typing evaluation. The S-N duality is the second of four factors that create sixteen distinct personality types, each one exhibiting common characteristics. The “S” represents Sensing – paying attention to what is actual, present, physical, and real. S types pay strong attention to the facts that are present and often miss possibilities that are not explicitly stated. The “N” represents Intuition – paying attention to the meanings and patterns of the information received. N types often remember events or choices as an impression of the situation, not the actual facts that exist [3]. Of the MBTI personality typing measures, the S-N duality was isolated as the first to study for the specific GUI designs in Figures 1-3, to illustrate differences. S-N provides a good opportunity to discern whether information received through a computing application is preferred to be long and explicit or shorter and based on patterns. The theory is that an S (Sensing) type will be more apt to like every detail presented in every screen, thus more information presented. The N (Intuition) type is theorized to pay attention to previous interactions with a GUI and to prefer abbreviated detail as the user becomes familiar with the program and the way that it functions (which is based upon the software engineering method and developers used to create the program).

Reliability of the tests is an issue that is controlled via Hays’ thesis on reliability and validity. He theorizes that all mental tests are subject to a certain degree of unreliability – variance based upon the occasion on which the test is administered. The observed score is considered as X = T + e, where X is the score obtained, T is the true score, and e is the error component. The reliability of the test then follows as:

Reliability of the test = σ_T² / (σ_T² + σ_e²)   [2]

This formula is taken into consideration when using human subjects to test the S-N duality, thus removing some (but not all) of the variance between applications of the same test by applying the test three times.
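To make the formula concrete, here is a small illustrative computation; the variance values are made-up placeholders, not measurements from this study. It also shows one common reading of why repeating the test helps: averaging k independent administrations reduces the error variance to roughly σ_e²/k.

```python
# Illustrative sketch of the classical reliability formula quoted above:
#   X = T + e,  reliability = var(T) / (var(T) + var(e))
# The variance values below are made-up placeholders, not data from the study.

def reliability(var_true: float, var_error: float) -> float:
    """Share of observed-score variance attributable to true-score variance."""
    return var_true / (var_true + var_error)

var_T = 4.0   # hypothetical true-score variance
var_e = 2.0   # hypothetical error variance for a single administration

print(f"Single administration: {reliability(var_T, var_e):.2f}")

# Averaging k independent administrations shrinks the error variance to var_e / k,
# which is one way to read the report's choice to apply the test three times.
k = 3
print(f"Mean of {k} administrations: {reliability(var_T, var_e / k):.2f}")
```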

-Does not taint user opinion by having a preselected option

-Enables a “process of elimination” strategy by selecting several options then deselecting the undesired

-All options explicitly stated

Figure 1. Checkbox window.

-One in five chance that no action is required (default already chosen)

-Preselection taints user choice
-All options explicitly stated

Figure 2. Selection button window.
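For readers who want to see the two window styles of Figures 1 and 2 side by side, below is a minimal Tkinter sketch. It is illustrative only, assumes generic placeholder option labels rather than the study’s actual wording, and is not the GUI used in the experiment.

```python
# Minimal Tkinter sketch of the two GUI styles compared in Figures 1 and 2:
# a checkbox window (no preselected option) and a selection-button window
# (radio buttons with a default already chosen). The option labels are
# illustrative placeholders, not the actual wording used in the study.

import tkinter as tk

OPTIONS = ["Option A", "Option B", "Option C", "Option D", "Option E"]

root = tk.Tk()
root.title("Figure 1 / Figure 2 style comparison")

# Figure 1 style: checkboxes, nothing preselected, multiple selection possible.
check_frame = tk.LabelFrame(root, text="Checkbox window")
check_frame.pack(side="left", padx=10, pady=10, fill="y")
check_vars = []
for label in OPTIONS:
    var = tk.IntVar(value=0)              # 0 = unchecked, so no option is preselected
    tk.Checkbutton(check_frame, text=label, variable=var).pack(anchor="w")
    check_vars.append(var)

# Figure 2 style: radio buttons sharing one variable, with a default preselected.
radio_frame = tk.LabelFrame(root, text="Selection button window")
radio_frame.pack(side="left", padx=10, pady=10, fill="y")
choice = tk.StringVar(value=OPTIONS[0])   # default already chosen (the 1-in-5 case)
for label in OPTIONS:
    tk.Radiobutton(radio_frame, text=label, variable=choice, value=label).pack(anchor="w")

root.mainloop()
```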


-One in five chance that no action is required (default already chosen)
-Options abbreviated based on pattern commonality (common introductory words)

Figure 3. Modified selection button window.

Results Obtained
The results of this project provide valuable information. Of the personality tests performed, the S user indicated more interest in always having all the information available, as in Figures 1 and 2. There was not a defined preference between the two, just that all the information was displayed. The N user preferred Figure 3 in situations where information was based upon connections, for example commonalities in the beginning of the option text, such as “more efficient.”

Significance and Interpretation of Results
This simulation of a personality-based GUI system is intended to represent a more complex system, although it suffers from several issues. One of the challenges in HCI of creating a personality-based system is the many layers of subjectivity. A couple of the many possible inaccuracies could be the personality type determined, the specific times chosen to study, or the content of the test material given. Overall, the project proved to be a study that suggests furthering independent agents pursuing the same goal of interface customization. The promise that personality typing holds for the future of artificial intelligence and software engineering interface design is great. An agent may be able to bypass much of the difficulty of a human observing subjectively and develop a more quantitative observation method.

Acknowledgments
I would like to thank my advisor, Dr. Waleed Smari, and the University of Dayton for support on this project.

References
[1] http://sigchi.org/cdg/cdg2.html#2_1
[2] Hays, William L. Quantification in Psychology. Brooks/Cole Publishing Company: Belmont, CA, 1967. pp. 66-67.
[3] http://www.myersbriggs.org/my_mbti_personality_type/mbti_basics/sensing_or_intuition.asp
Head, Allison J. Design Wise: A Guide for Evaluating the Interface Design of Information Resources. Information Today, Inc.: Medford, NJ, 1999.
Horrocks, Ian. Constructing the User Interface with Statecharts. Addison-Wesley: Harlow, England, 1999.
Thimbleby, Harold. User Interface Design. Addison-Wesley Publishing Company: Wokingham, England, 1990.
Thomas, Peter J. The Social and Interactional Dimensions of Human-Computer Interfaces. Cambridge University Press: New York, NY, 1995.


How Big Is Our Solar System? An Exploration of Proportion and Scale for Algebra I

Student Researcher: Amanda J. Anzalone

Advisor: Dr. Kevin Roper

Cedarville University

Department of Science and Mathematics

Description
One of the challenges of an algebra class is to find concrete examples that will allow students to practice the skills they are learning, work on developing strategies for solving problems, and provide context for an often abstract subject. In this lesson - which is part of a unit on fractional equations, proportions, and percents - students use proportional reasoning to find the dimensions of planets and other bodies in two scale models of the solar system. Given actual dimensions and a scale, students work with a partner to find the dimensions of the models. After these calculations, students were asked to interpret their results through a variety of short-answer questions aimed at higher-level thinking.

Level: Grades 8-9, Algebra I

Lesson Objectives
• Students shall use proportions to solve problems involving scale models.
• Students shall relate the relative sizes of bodies in the solar system, and distances between them, to the relative sizes and distances of familiar objects.

Alignment with Ohio Content Standards
Students shall . . .
• Use scientific notation to express large and small numbers.
• Apply proportional reasoning to solve problems involving missing lengths.
• Estimate, compute, and solve problems involving real numbers and scientific notation.
• Apply mathematical knowledge and skill in other content areas.

Resources/Materials
Worksheets (attached), white drawing paper, crayons or colored pencils, calculators. You may also want to have a ping-pong ball or dime on hand. Students will also need scratch paper to work on and possibly to attach additional answers. Another nice visual to have would be lithographs of the solar system.

Connections to Previously Learned Material
I used this lesson during a unit on algebraic ratios, proportions, and percents. We first spent a chapter studying skills, in which students practiced what they were learning in exercises. After this, we touched all the same topics again, looking this time at applications and problem solving strategies involving those skills. This lesson fit into the second half of the unit. The day before we did this activity, we did a lesson on using proportions in population sampling. Because of this, students were already familiar with how to set up and solve a proportion, so I did not need to spend much time on those skills.

Procedures

1.) Briefly review the process of setting up a ratio, the definition of proportion, and how to solve a proportion. You could use the ratio of girls to boys in your class to get an estimate for the ratio of girls to boys in the school. (This is also a good time to get an informal evaluation of last night’s homework.) Use this time to introduce the concept of scale. Students should be familiar with this term already from middle school mathematics.

2.) Tell students that we are going to be doing an activity today related to scale models of the solar system. Explain that there are three sections. In the first, they will be finding the dimensions of a model where the Earth is the size of a ping-pong ball. In the second, the sun is the size of a dime


and the distance from the earth to the sun is about two yards. For each of these two, they will need to find proportional dimensions for several other distances in the solar system, and write these dimensions into a chart. After filling the chart out, there are several short answer questions related to that particular model. After finishing both models, students will have the choice of doing one of two additional activities. Make sure there are no questions regarding what the students are to do.

3.) Give students 30 seconds to find a partner they would like to work with, or assign partners. 4.) Pass out worksheets to each of the pairs. Students will each be turning in their own worksheet,

but only one drawing for the group (if they decide to draw.) 5.) The majority of the period (about 30 minutes for a 45 minute class) should be spent working on

the activity. Walk around the room to be sure that students are on-task, and to answer any questions that may arise.

6.) Wrap-up the class by collecting worksheets from any students who are finished and asking students to share what they learned today.

Assessment and Conclusions I did not assess this lesson on its own, beyond grading the calculations, reading the responses, and looking at the students' drawings. However, I did assess the skills being used in a chapter test given a few days after the activity. I had also given a test over the same topics about a week and a half before the activity. The table below compares those two tests:

                                                              Before       After
Mean Grade for Chapter                                          72%         87%
Median Letter Grade                                              C           B
Mode Letter Grade                                                D           A
Number of students answering proportion questions correctly   36 of 45    43 of 45

The results suggest that going through the chapter a second time, using word problems and applications to help students practice skills, was extremely beneficial. This would appear to support the idea that students learn better when dealing with skills or concepts in their real-world context, rather than with rote drills and basic exercises. However, it could also be that the mere act of studying topics a second time did as much to bring the grades up as the format did. This provided students with an opportunity to get further clarification of issues they were uncertain of, extra practice, and a second explanation. It should also be noted that there were necessary differences in the formats of the two tests. Although most students will tell you that word problems are much more difficult than exercises, there were also differences in the partial credit given and in the number of questions asked, both of which can greatly affect grades. Therefore, it may be unfair to compare the two tests.

Sources
Educational Background:
Daniels and Zemelman. Subjects Matter. Portsmouth: Heinemann, 2004.
Eggen and Kauchak. Educational Psychology: Windows on the Classroom. Upper Saddle River: Pearson Prentice Hall, 2004.
NASA: Solar system dimensions from http://solarsystem.nasa.gov/planets/index.cfm
Idea from http://media.nasaexplores.com/lessons/04-213/5-8_2.pdf
Other Sources:
http://www.robbinstabletennis.com/oruball.htm - diameter of ping-pong ball
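For instructors who want to generate or check an answer key for the attached worksheet, the short Python sketch below applies the same proportional reasoning the students use (model size = actual size times the reference ratio). The body names and reference values come from the worksheet; the two-decimal rounding and the loop structure are simply one possible way to present the results, not part of the original lesson materials.

    # Scale every body by the worksheet's reference pair: Earth, 12,756 km -> 4 cm.
    bodies_km = {
        "Sun": 1.4e6, "Earth": 12756, "Moon": 3476, "Mars": 6794,
        "Jupiter": 139822, "Saturn": 120536, "Pluto": 2274,
    }
    scale = 4.0 / 12756.0          # model centimetres per actual kilometre
    for name, diameter_km in bodies_km.items():
        print(f"{name:8s} {diameter_km * scale:10.2f} cm")

Running the sketch reproduces the 4 cm Earth entry and gives, for example, roughly 439 cm for the Sun, which is useful when discussing question 1 of the worksheet.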


How Big Is Our Solar System? Names: _______________________

I. Imagine that the earth is the size of a regulation ping-pong ball. How big would these other bodies be? Use proportions to find the answers.

Body       Diameter (km)    Model Diameter (cm)
Sun        1.4 x 10^6
Earth      12,756           4
Moon       3,476
Mars       6,794
Jupiter    139,822
Saturn     120,536
Pluto      2,274

1.) If earth were the size of a ping pong ball, would Saturn fit into a regulation basketball hoop (diameter 46 cm)? Would Jupiter? Would the Sun?
2.) If you were actually building this model, using a ping pong ball for earth, what might you use for some of the other planets?

Names: _______________________

II. Imagine that the sun were the size of a dime. In this model, the earth would be 2 yards from the sun. Find these other distances.

Bodies          Distance (km)    Model Distance (yards)
Earth to Moon   384,000
Earth to Sun    1.5 x 10^8       2
Sun to Mars     2.28 x 10^8
Sun to Pluto    5.906 x 10^9

3.) Would the solar system in this model fit onto a football field? Why or why not?
4.) What surprised you most about what you learned today?

III. Do one of the following two activities:
A.) With your partner, draw a picture illustrating something you learned in this lesson. Label your drawing if necessary. The picture should be neat, colored, and any labels must be legible.
B.) Using the sun=dime model, find the sizes of the planets in part I.


Computational Study of Cathode Location in the Discharge Chamber of an Ion Engine

Student Researcher: Joshua M. Arter

Advisor: Dr. James Menart

Wright State University Mechanical Engineering Department

Abstract Ion engines are used in various applications such as satellite station keeping and long-range space missions. Efficiency is a primary concern in order to increase the velocity of the spacecraft for a small amount of fuel. To maintain the efficiency of an ion engine, the confinement of primary electrons is very important. Factors contributing to the confinement of the primary electrons are the placement of the ring cusp magnets and the location of the cathode. Rather than experimentally testing various configurations of magnet placement and cathode location, computers can be used to model such configurations. A computer code, PRIMA, is used in this work to model cathode location in the magnetic field of the discharge chamber of an ion engine. The criterion for the optimal location of the cathode is the configuration that keeps the primary electrons in the discharge chamber for the longest period of time.

Project Objectives For the discharge chamber of an ion engine to operate efficiently, the confinement of primary electrons is crucial.1 The confinement of the primary electrons is produced by creating a magnetic field in the discharge chamber. Ring cusp permanent magnets are used to create this magnetic field. The purpose of the primary electrons is to produce ions; electrostatic acceleration of these ions then produces thrust. The general concept is that the longer the electrons are confined in the discharge chamber, the more likely they are to produce ions. The intent of this study is to optimize the cathode location inside the discharge chamber to produce the most efficient confinement of primary electrons. The optimum confinement of primary electrons is determined by the length of the path that the electrons travel while in the discharge chamber.4,6 This path is called the confinement length of the electrons. Also, the absorption of electrons into the walls equates to lost ionization in the chamber. This study consists of using computer modeling to produce the average confinement lengths corresponding to various configurations with various cathode locations. The use of computers in this study is much more cost effective than experimentally testing a large number of configurations, as there is at most an 18% difference between computational and experimental results obtained with the software PRIMA.

Methodology Used Two different computer programs were used to model the discharge chamber of the ion engine. The first program used is MAXWELL 2D2, which was used to model the magnetic field produced by the ring magnets. There are two magnetic rings at right angles to one another, the first oriented on the back wall of the discharge chamber and the second oriented around the sidewall of the discharge chamber. MAXWELL 2D was also used to assign materials to the components. The walls of the discharge chamber are made of aluminum and the magnets were rare earth samarium cobalt magnets. Next, the ion engine model was input into the second program, PRIMA, which determines the effect of the magnetic field on the trajectory of the primary electrons.3,6 PRIMA is basically a particle-in-cell code that tracks a primary electron throughout the discharge chamber. The primary electron is the principal ionizing particle, and PRIMA models elastic primary electron collisions. The primary results obtained from PRIMA are the confinement length and the relative number density. Seven cases with non-dimensional cathode locations of 0.07, 0.21, 0.35, 0.5, 0.64, and 0.78 were chosen to be evaluated. The average confinement length for each cathode location was determined from the results. Different cathode locations produce different trajectory patterns within the magnetic field of the discharge chamber. If a shorter path is obtained, full ionization will not be achieved, resulting in lower efficiency.6
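PRIMA itself is not reproduced here, so the Python fragment below is only a schematic illustration of the kind of calculation involved: a single electron is pushed through a prescribed magnetic field with a simple explicit Lorentz-force integration, and its path length is accumulated until it reaches a chamber wall. The placeholder field function, chamber dimensions, time step, and initial conditions are all assumptions made for illustration; a real model would take the field from MAXWELL 2D and a far more careful integrator.

    import math

    # Illustrative constants (not values from the paper)
    QM = -1.759e11                  # electron charge-to-mass ratio, C/kg
    R_WALL, L_CHAM = 0.05, 0.10     # assumed chamber radius and length, m

    def b_field(x, y, z):
        """Placeholder magnetic field (T); a real model would come from MAXWELL 2D."""
        return (0.0, 0.0, 0.01)

    def confinement_length(pos, vel, dt=1e-11, max_steps=200000):
        """Track one primary electron; return its path length until it hits a wall."""
        x, y, z = pos
        vx, vy, vz = vel
        length = 0.0
        for _ in range(max_steps):
            bx, by, bz = b_field(x, y, z)
            # Lorentz acceleration (no electric field in this sketch): a = (q/m) v x B
            ax = QM * (vy * bz - vz * by)
            ay = QM * (vz * bx - vx * bz)
            az = QM * (vx * by - vy * bx)
            vx, vy, vz = vx + ax * dt, vy + ay * dt, vz + az * dt
            x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
            length += math.sqrt(vx * vx + vy * vy + vz * vz) * dt
            if x * x + y * y > R_WALL ** 2 or not (0.0 <= z <= L_CHAM):
                break   # absorbed at a wall
        return length

    if __name__ == "__main__":
        print(confinement_length((0.0, 0.0, 0.02), (1.0e6, 0.0, 5.0e5)))

Averaging this quantity over many launched electrons for each candidate cathode position is, in spirit, what the survey in the next section reports.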


Results Obtained In this work, a survey of the location of the cathode emissions in the discharge chamber of an ion engine was conducted. This survey was taken to find the optimum location of the cathode, the one resulting in the longest confinement length of the primary electrons. The comparison of the cathode locations against the confinement length is shown in Figure 1.

[Plot: normalized average confinement length versus non-dimensional cathode location]

Figure 1. Comparison of Cathode Locations vs. Confinement Length.

The optimum non-dimensional cathode location, as indicated in Figure 1, is 0.35. This holds true because the primary electron stays in the discharge chamber the longest before being absorbed into the walls. In Figures 2 and 3, the relative primary electron density and the trajectory for a non-dimensional cathode location of 0.35 are shown. The trajectory plot shows the path of one primary electron throughout the discharge chamber with respect to the magnetic field. As shown in this plot, the primary electron was kept away from the walls, resulting in the longest confinement length. The relative primary electron number density plot also shows that electrons do not easily reach the anode-biased walls.

Figure 2. Relative Primary Electron Density with Non-Dimensional Cathode Location of 0.35.

Figure 3. Trajectory Path of Primary Electron with Non-Dimensional Cathode Location of 0.35.

References
1. Kaufman, H. R., “Operation of broad-beam sources,” Commonwealth Scientific Corporation, Alexandria, Virginia, p. 12, 1984.
2. Ansoft Corporation, Maxwell 2D web site, http://www.ansoft.com/products/em/max2d/, 2005.
3. Arakawa, Y., and Yamada, T., “Monte Carlo simulations of primary electron motions in cusped discharge chambers,” International Electric Propulsion Conference, AIAA-90-2654, 1990.
4. Deshpande, S. S., “Computer study of primary electrons in the cusp region of an ion engine’s discharge chamber,” AIAA Paper 2004-4109, 2004.
5. Mahalingam, Sudhakar, “Primary electron modeling in the discharge chamber of an ion engine,” Master’s Thesis, Wright State University, 2002.
6. Deshpande, S. S., Ogunjobi, T., and Menart, J., “Computational study of magnet placement on the discharge chamber of an ion engine,” AIAA Paper 2005-4254, 2005.


Composite Shape Memory Polymer Construction

Student Researcher: Charles W. Barbour, II

Advisor: Dr. Jandro Abot

University of Cincinnati Aerospace Engineering

Abstract Material thermo mechanical behavior is a vital component in the design of a device. The material’s ability to take mechanical loads, its resistance to environmental wear, and its magnetic and electrical properties are all factors used in the development of engineered systems. SMP are smart materials that have large strain recovery when exposed to specific stimuli. Manufacturing is relatively simple since it is a polymer, this leads to larger availability and cheaper cost of the material. A way to improve an SMP is to create a composite such that a core material adds its characteristics of ductility to the SMP’s ability of strain recovery. This project investigates the structural behavior of composite shape memory polymers. In particular, focusing on the shape memory polymer (SMP) Veriflex, produced by Cornerstone Research Group (CRG). The SMP is used as the matrix for carbon fiber plain weave composite material. Project Objectives Shape memory materials are materials that change their geometric shape as a response to external stimuli. There are various types of stimuli such as electrical, magnetic and radiation. Thermally triggered materials are known as thermoresponsive shape memory materials, as shown in Figure 1. After straining the material, the original or pre-programmed shape or a fraction of the shape can be obtained by applying the proper stimulus to the device. Figure 2 shows the stress–strain curves in a thermomechanical test with the typical cycle segments labeled. Depending on the material composition and programming regime, a shape memory material can possibly conform to more than one shape. The process of only being capable of recovering one “permanent” shape is known as one-way shape memory effect, studied by Chang and Read in 1951 for gold-cadmium alloy, shape memory alloy (SMA).[1] This thermoresponsive material exhibited deformation recovery of up to 8%.[1] Gall and Dunn indicate that strain recovery for SMAs and shape memory ceramics (SMC) are about 10% and 1% respectively, with shape memory polymers able to obtain near perfect recovery.[2] This project seeks to provide a consistent and reproducible composite that can then be characterized. Particularly, the shape memory polymer material from Cornerstone Research Group (CRG) is used as the matrix for the composite with carbon fiber plain weave. The stress-strain and temperature properties are the focus of this research since this material is activated at a certain temperature. Methodology The first step of the process is to create shape memory polymer matrix samples. The matrix sample is the polymer alone with no carbon fiber reinforcement. Veriflex by CRG comes in two separate liquid parts, a resin and hardener type system. Once the mixture is created, the material is placed in a thin rectangular mold approximately 10mm by 100mm. This mold is then inserted into a vacuum bag and subject to a curing cycle. Two curing cycles are then used to determine how best to make consistent samples, both obtained from CRG. This utilizes a heat press or oven connected to a control system for temperature rate change and temperature duration. In the case of the heat press, load/pressure is also controlled. This provides a cycle time of around 8 hours, when the heat rate and pressure are controlled precisely. The other approach is with a longer cycle time. This method uses a heat oven at a static temperature of 75° C for 24 hours. Next is the creation of shape memory composite. 
The fiber material investigated is carbon fiber plain weave. This composite is created using three methods: closed mold heat press, vacuum mold heat press (Figure 4), closed mold heat oven (Figure 3). All methods utilize a wet lay-up system, where the SMP material is spread over the fibers in atmosphere and then placed into a vacuum bag with a separator/bleeder combination and sealed under pressure. The closed mold systems pull vacuum then


release the pump but maintain the vacuum in the bag. The vacuum mold system pulls constant vacuum throughout the process. Characterizing the shape memory polymer matrix samples is the fourth step in this project. Once a consistent sample set can be obtained, the material is subjected to several tests. Strain gages and taps are attached to the samples, which are then placed into a tensile loading stage. This device applies a tensile load and sends the information to a computer to calculate the stress-strain curve for the sample. This is done for both a temporary as well as a permanent state of the SMP. The final step is to characterize the shape memory composite. Once a consistent sample set can be obtained, the composite material is cut into appropriate sample sizes and subjected to the same tests as the SMP material.

Results Obtained Due to the difficulty in obtaining consistent results and to time constraints, the research project did not proceed much beyond the material production stage. Had there been more time, the shape memory composite would have been characterized as stated in the methodology.

Figures

Figure 1. Transition from the temporary shape (spiral) to the permanent shape (rod) for a shape-memory network that has been synthesized from poly dimethacrylate and

butylacrylate. The switching temperature of this polymer is 46°C. The recovery process takes 35 s after heating to 70°C.[1]


Figure 2. Stress–strain curves in the thermomechanical test.[2]

Figure 3. Heat oven with sample curing inside

Figure 4. Heat press with sample curing inside

Figure 5. [Left] SMP, [Right] Composite SMP


Acknowledgments Special thanks are due to several members of the University of Cincinnati faculty: Dr. Gary Slater, for the introduction to Dr. Jandro Abot and for the opportunity to participate in the Ohio Space Grant Consortium, and Dr. Jandro Abot, for providing guidance and good research advice.

References
[1] Lendlein, A., Kelch, S., “Shape Memory Polymers,” Angew. Chem. Int. Ed., 2002.
[2] Gall, K., Dunn, M., “Shape memory polymer nanocomposites,” Acta Materialia, vol. 50, pp. 5115-5126, 2002.
[3] Tobushi, H., Matsui, R., “The influence of shape-holding conditions on shape recovery of polyurethane shape memory polymer foams,” Smart Materials and Structures, Institute of Physics Publishing, 2004.
[4] <http://www.crgrp.net/index.php>


Zero Boil-Off Pressure Control of Cryogenic Storage Tanks

Student Researcher: Stephen Barsi

Advisors: Dr. Iwan Alexander and Dr. Mohammad Kassemi

Case Western Reserve University Mechanical and Aerospace Engineering Department

Abstract Recent studies suggest that Zero Boil-Off (ZBO) technologies, aimed at controlling the pressure inside cryogenic storage tanks, will play a prominent role in meeting NASA’s future exploration goals. Small-scale experiments combined with validated and verified computational models can be used to optimize and then to scale up any future ZBO design. Since shortcomings in previous experiments make validating comprehensive two-phase flow models difficult at best; the Zero Boil-Off Tank (ZBOT) experiment has been proposed to fly aboard the International Space Station. In this paper, a numerical model has been developed to examine several pressure control strategies within the ZBOT test matrix. Specifically, the four strategies include axial liquid jet mixing, mixing provided by a sub-cooled liquid jet, bulk liquid cooling provided by a cold-finger and cold-finger cooling with axial jet mixing. Results indicate that over the time scales under consideration, sub-cooled liquid jet mixing is the most effective means to reduce tank pressure. Project Objectives Affordable and efficient cryogenic storage for use in propellant systems is essential to meeting NASA’s future exploration goals1. Cryogen mass loss occurs when heat leaks into the tank from the surrounding environment. When heat enters the tank, warmer fluid will be carried to the liquid vapor interface by natural convection. As the warmer fluid reaches the interface, evaporation will occur, resulting in vapor compression and a subsequent rise in tank pressure. A strategy, which has been developed to control tank pressure, is a zero boil-off system where a combination of forced liquid mixing and active cooling is employed. Optimizing any future ZBO system requires careful consideration of various mixing and cooling concepts and their effects on the underlying transport processes in the liquid and vapor. A flight-like demonstration with an actual cryogen is essential to understanding all of the complicated interactions involved in controlling tank pressure. Unfortunately, performing these tests can be costly and time-consuming. An alternative approach to optimizing a ZBO system combines small-scale model fluid experiments in both 1g and low g environments with detailed computational modeling. Data from the small-scale fluid experiments can be used to validate, verify, and refine comprehensive two-phase numerical and analytical models. Once validated, the CFD models, along with the experimental data, will be used to scale up the ZBO design to a cryogenic flight system. The objective of this research then, is to develop a comprehensive two-phase flow numerical model that can be used to evaluate different pressure control strategies. In particular, the numerical model will be used to analyze four cases in the test matrix of the low gravity Zero Boil-Off Tank (ZBOT) experiment. Methodology To solve this problem numerically, a finite volume model for the problem was developed. In this primitive variable formulation, all variables are defined at the cell centers. The standard Rhie and Chow correction2 is used to define mass fluxes at the cell faces. Moreover, corrections due to grid non-orthogonality are also employed when computing cell-face fluxes. The SIMPLE pressure correction method3 is used to update the pressure field. The scheme is nominally 2nd order accurate employing a combination of upwinding and central differencing in a deferred correction approach to spatial derivatives with 2nd order multi-level time integration. 
The coupled system of equations is solved sequentially using Stone’s semi-implicit procedure4. For a general overview of the numerical methodology employed for this problem see Ferziger and Peric5.


In order to solve the present vaporization problem, a lumped energy and lumped mass model of the vapor is coupled to the field equations in the liquid6. To compute the solution, first the field equations are solved in the liquid region. Once the temperature field is known, the heat flux on the liquid side of the interface can be computed. This heat flux can be integrated over the interfacial area yielding the net heat power entering the vapor from the liquid side of the interface. The net heat power entering the vapor is used by the lumped model to evolve the vapor pressure in time. A new vapor pressure, however, imposes a new saturation temperature along the interface, which can change the heat flux on the liquid side of the interface. Thus iteration is required to converge to a solution. Results In order to evaluate the four pressure control strategies, first a tank partially full of HFE-7000 is allowed to self-pressurize under a constant heat load of 0.5 W uniformly distributed over two strip heaters affixed to the outer wall of the tank. HFE-7000 is a low boiling point refrigerant and is one of the candidate test fluids defined by the ZBOT experiment. The volumetric liquid fill fraction is 95% and the spherical vapor bubble is fixed at the end of the tank opposite the direction of the residual gravity vector. As shown in Figure 1, heat has entered the tank wall near where heat is being supplied by the strip heaters. Natural convection currents carry this heat up along the tank wall towards the interface. Once this heat reaches the interface, the liquid will begin to vaporize, the vapor will be compressed and the tank pressure will increase. The four pressure control strategies analyzed here all begin after self-pressurizing the tank for 12 hours. The heaters remain active during the 2 hours the mixer and/or cooler are on. In the first case study, we attempt to reduce tank pressure by disrupting the thermal stratification in the liquid with an axial jet mixer with an average speed of 0.1 cm/s. In this case, the axial liquid jet only mixes the bulk liquid; no energy is removed between the tank outlet and jet inlet. In order to reach the interface, the upwardly traveling mixing jet must overcome the buoyancy-induced flow that is pulling fluid down along the centerline of the tank. However, since the average jet speed in the present case is several orders of magnitude greater than the largest natural convection speed, the liquid jet easily overcomes the opposing buoyant flow. When the colder fluid, which has settled at the bottom of the tank during the self-pressurization period, reaches the interface, condensation will begin and, the vapor pressure will decrease. Isotherms in Figure 2 suggest that the re-circulating jet flow is pulling warmer fluid from the bottom heater radially inward. When this warmer fluid gets entrained with the jet and gets carried to the interface, evaporation will occur and the vapor pressure will begin to rise again. While the axial jet mixer does an adequate job at de-stratifying the liquid, since no energy has been removed from the tank, the vapor pressure will eventually increase. The short-term drop in pressure is a result of bringing the cooler fluid that has settled to the bottom of the tank up towards the interface. To sustain this pressure reduction, energy must be removed from the system. For the second case study, sub-cooled jet mixing is used to control the tank pressure. 
Once again, an axial liquid jet enters the tank with an average speed of 0.1 cm/s. Here, it is assumed an efficient heat exchanger that exists outside of the computational domain can remove enough heat from the liquid so that the incoming jet temperature is maintained at a constant 293 K. In addition to competing with buoyancy-induced flows in the tank, a cold-jet rising into a warmer fluid must also overcome the negative buoyancy of the jet itself before the cooler fluid can reach the interface. Once again though, buoyancy effects are reduced in microgravity and the jet’s momentum easily carries fluid up towards the interface. As shown in Figure 3, once the cooler fluid reaches the interface, the vapor will start condensing and the vapor pressure will rapidly drop. From a power consumption standpoint, it may be impractical to continuously run a jet mixer. Also reliability concerns may force the cryogenic community to consider separating the active cooling mechanism from axial jet mixing. So, for the next case study, the jet mixer is turned off and a cold-finger ring submerged in the liquid is used to provide active cooling. In this case, since the axial jet mixer remains off, any mixing in the liquid is a result of natural convection. As shown in Figure 4, the cold fluid remains localized to the vicinity around the cold-finger. The cooler fluid actually sinks towards the bottom of the tank away from the interface. It sinks for two reasons. First, the cooler fluid sinks under its


own buoyancy. Second, the upper convective vortex is driving a flow up along the tank wall, along the interface and down along the tank’s centerline. The combined effect is that the cold fluid remains in the bottom half of the tank and never reaches the interface. As such, condensation never begins, and the vapor pressure continues to rise. Thus, over the two hour mixing/cooling period in the ZBOT experiment, cold-finger cooling alone is not an effective means to reduce tank pressure. To enhance the cooling effect of the cold-finger, for our final case study, we combine cold-finger cooling with axial jet mixing. Once again, no energy is removed between the tank outlet and jet inlet. The axial jet is only used to mix the bulk liquid. As time progresses, heat conducts into the cold-finger and cooler fluid gets entrained with the incoming liquid stream. Once this cooler fluid reaches the interface the vapor will begin to condense and the vapor pressure will decrease as shown in Figure 5. It should be noted that the diameter of the cold finger ring is a very important design parameter. If the diameter of the ring is too small, the jet flow will be blocked from reaching the interface and jet enhancement of cold-finger cooling will be reduced. Similarly, if the diameter of the ring is too large, cold fluid around the cold-finger may not become entrained in the incoming liquid jet and cold-finger cooling with axial jet mixing will not be as effective. Conclusions Recent studies suggest that Zero Boil-Off (ZBO) technologies, aimed at controlling the pressure inside cryogenic storage tanks, will play a prominent role in meeting NASA’s future exploration goals. Small-scale experiments combined with validated and verified computational models can be used to optimize and then to scale up and future ZBO design. Since shortcomings in previous experiments make validating comprehensive two-phase flow models difficult at best; the Zero Boil-Off Tank (ZBOT) experiment has been proposed to fly aboard the International Space Station. In this paper, a numerical model was developed to examine several points in the ZBOT test matrix. First an axial liquid jet was used to mix the bulk liquid. Results indicated that after an initial drop in tank pressure, a mixing jet alone is not enough to further reduce tank pressure. Further reductions must be accompanied by removing energy from the system. The introduction of a sub-cooled liquid jet and cold-finger cooling are two strategies under consideration to remove energy from the system and reduce tank pressure. It was shown that over the time scales under consideration, sub-cooled jet mixing is an effective way to control tank pressure but cold-finger cooling is not. To enhance the effect of the cold-finger, an axial mixing jet was introduced into the tank while the cold-finger was active. While pressure reduction was enhanced with the addition of the mixing jet, sub-cooled jet mixing was found to be the most effective strategy under consideration to rapidly reduce tank pressure.
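To make the interface coupling loop described in the Methodology concrete, the Python sketch below replaces the liquid-side CFD solve with a simple stub and advances the lumped vapor with an ideal-gas balance and a Clausius-Clapeyron saturation line. Every number in it (the fluid properties, vapor volume and mass, the bulk temperature and conductance in the stub, and the convergence tolerance) is an assumed placeholder chosen only to show the iteration pattern; it is not the ZBOT model.

    import math

    # Assumed properties for an HFE-7000-like fluid (illustrative only)
    R_SPEC, H_FG = 40.0, 1.42e5      # J/(kg K), J/kg
    P_REF, T_REF = 1.0e5, 307.0      # assumed reference saturation point, Pa and K
    V_VAP, M_VAP = 5.0e-4, 5.0e-3    # assumed vapor volume (m^3) and mass (kg)

    def t_sat(p):
        """Clausius-Clapeyron saturation temperature for pressure p (Pa)."""
        return 1.0 / (1.0 / T_REF - R_SPEC * math.log(p / P_REF) / H_FG)

    def liquid_heat_power(t_interface):
        """Stub for the liquid-side solve: net heat power (W) reaching the interface."""
        t_bulk, conductance = 310.0, 2.0      # assumed bulk temperature (K) and UA (W/K)
        return conductance * (t_bulk - t_interface)

    def advance_pressure(p_old, dt, tol=1.0, max_iter=50):
        """One time step of the coupled lumped-vapor / liquid iteration."""
        p = p_old
        for _ in range(max_iter):
            q = liquid_heat_power(t_sat(p))      # heat power from the liquid side
            m_dot = q / H_FG                     # evaporation (+) or condensation (-)
            rho = (M_VAP + m_dot * dt) / V_VAP
            p_new = rho * R_SPEC * t_sat(p)      # ideal-gas lumped vapor pressure
            if abs(p_new - p) < tol:             # pressure and T_sat now consistent
                return p_new
            p = p_new
        return p

    if __name__ == "__main__":
        print(advance_pressure(1.0e5, dt=1.0))

The essential point the sketch preserves is that the vapor pressure and the interface saturation temperature must be iterated to mutual consistency within each time step, exactly as described above.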


Figures

Figure 1. Isotherms and streamtrace after self-pressurizing for 12 hours.

Figure 2. Axial jet mixing solution after 2 hours of mixing.


Figure 3. Sub-cooled jet mixing solution after 2 hours of mixing and cooling.

Figure 4. Cold-finger solution after 2 hours of cold-finger operation.


Figure 5. Cold-finger + axial jet mixing solution after 2 hours of operation.

References
1. L. J. Salerno and P. Kittel, Cryogenics and the human exploration of Mars. Cryogenics, 39:381-388, 1999.
2. C. M. Rhie and W. L. Chow, A numerical study of the turbulent flow past an isolated airfoil with trailing edge separation. AIAA J., 21:1525-1532, 1983.
3. S. V. Patankar. Numerical Heat Transfer and Fluid Flow. McGraw-Hill, 1980.
4. H. L. Stone. Iterative solution of implicit approximations of multidimensional partial differential equations. SIAM J. Numerical Analysis, 5:530-558, 1968.
5. J. H. Ferziger and M. Peric. Computational Methods for Fluid Dynamics. Springer, 3rd edition, 2002.
6. C. H. Panzarella and M. Kassemi, On the validity of purely thermodynamic descriptions of two-phase cryogenic fluid storage. J. Fluid Mech., 484:41-68, 2003.


Dimensional Analysis of Tire Loading

Student Researcher: Leah I. Baughn

Advisor: Dr. Ping Yi

The University of Akron Civil Engineering

Abstract Developing a new concept for weigh stations has become a priority in the transportation field. The current weigh stations in use are time consuming and expensive to maintain. There have been some developments for weigh-in-motion systems which have made weighing trucks more convenient for truck drivers and highway workers alike. However, these weigh-in-motion systems have sensors in the pavement, making the accuracy of the weights yielded dependent on the quality of the pavement. The purpose of this project is to evaluate the change in tire dimensions of standard tires to determine loading of a vehicle. By completing this project the feasibility of using this method to alleviate weigh stations can also be determined. The results of this research can be used by the United States Department of Transportation. Moreover, the results can be used to establish weigh stations requiring only digital imaging nationwide. Project Objectives The scope of this project includes developing a relationship between tire profile dimensions and vehicle load determination. First, a method must be researched to understand tire geometry such as height to width ratios or height to contact length ratios. The next objective is to evaluate if there is a concrete relationship with a change in tire dimensions to loading on a vehicle. Methodology Used First, the tire pressure is set to design regulations. To obtain an accurate loading as a basis for analysis, the vehicle is weighed unloaded on a conventional certified automated truck scale. Next, the vehicle is loaded and weighed under loading conditions. The length of the contact patch is measured under loaded conditions. Knowing the dimensions, of the tire, a mathematical relationship is developed to determine the relationship between the dimensions of the tire and the loading on the tire. The basis of the tire geometry analysis is the equation (Gao, Lam, Prahash, Srivastan, and Stearns, 2005):

h = r [1 - cos(α/2)]

where
h = difference between the diameter of the tire and the vertical height of the loaded tire to the ground
r = radius of the tire
α = angle subtended from the beginning of the contact patch to the end of the contact patch

From here the length of the contact patch can be obtained by solving for α and then using basic trigonometry, such that:

c = 2 r sin(α/2)

where c is the length of the contact patch. By obtaining the length of the contact patch, one can create a database for determining the loads associated with particular contact patch lengths. Also, a mathematical relationship can be programmed into a system by using:


c1 / c2 = L1 / L2

Once this relationship is established, one would enter a value in for c2 based on a digital image from a high-speed camera, and obtain L2. The values, c1 and L1 would be set values, such as the unloaded vehicle weight and its corresponding contact length. Results Obtained The maximum tire load for the vehicle is 4940 lbs. The outside diameter of the tire was 28.4 in. The width of the tire was 8.9 in. The tire was tested under three loading conditions assuming steer axle load on one tire is exactly half of total steer axle load.
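The two relationships above can be combined in a few lines of code. In the Python sketch below the tire diameter matches the test tire reported in the results, but the deflection value and the reference pair (c1, L1) are assumed placeholders rather than measured data, so the printed numbers are for illustration only.

    import math

    def contact_length(diameter_in, deflection_in):
        """c = 2 r sin(α/2), with α recovered from h = r(1 - cos(α/2))."""
        r = diameter_in / 2.0
        alpha = 2.0 * math.acos(1.0 - deflection_in / r)
        return 2.0 * r * math.sin(alpha / 2.0)

    def load_from_contact(c2_in, c1_in, l1_lbs):
        """Proportional estimate of L2 from c1/c2 = L1/L2, given a reference pair."""
        return l1_lbs * c2_in / c1_in

    if __name__ == "__main__":
        c = contact_length(28.4, 0.5)          # 28.4 in tire, assumed 0.5 in deflection
        print(f"contact length: {c:.2f} in")
        print(f"estimated load: {load_from_contact(c, 3.0, 1000.0):.0f} lbs")  # assumed c1, L1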

[Plot: contact length (in) versus load (lbs)]

Figure 1. Load-Contact length Relationship.

References
Banasiak, David. Wyoming breaks ground for intelligent weigh station. Roads & Bridges, Jul 97, Vol. 35, Issue 7, p. 28.
Science News. The road to intelligent weigh stations. 6/12/93, Vol. 143, Issue 24, p. 383.
Stearns, J.; Srivatsan, T. S.; Gao, X.; Prahash, A.; Lam, P. C. Analysis of stress and strain distribution in a vehicle wheel: finite element analysis versus the experimental method. Journal of Strain Analysis for Engineering Design, 8/1/2005, Vol. 40, Issue 6, pp. 513-523.
Taninecz, George. Automated weighing without wait. Electronics, 4/25/94, Vol. 67, Issue 8, p. 6.
Wilson, Jim; Gleaves, Kathleen. Electronic weigh stations. Popular Mechanics, Mar 97, Vol. 174, Issue 3, p. 28.


Stainless Steel Braze Mandrel Failure

Student Researcher: Michael E. Bojanowski

Advisor: Dr. James Moller

Miami University Department of Mechanical and Manufacturing Engineering

Abstract This project focuses on the study of the failure of stainless steel mandrels during the brazing of high-strength alloys. During this study, the brazing of an aircraft engine exhaust nozzle constructed from Inconel 625 will be examined. Particular attention will be paid to the behavior of the austenitic stainless steel fixture, known as a mandrel, on which the nozzles are mounted during brazing. The nature of defects commonly found in the mandrels will be explored, and several candidate causes of these defects will be proposed. These causes will be analyzed through the use of extensive mathematic modeling and experimental results. Project Objectives This project was completed in conjunction with Aeronca, Inc., a leading manufacturer of aerospace components. Aeronca uses a brazing process in constructing jet engine exhaust nozzles from Inconel 625. Brazing is a metal-joining process that uses heat and a filler material to join two pieces without melting the base materials1. Prior to being brazed, the conical exhaust nozzle is placed on a mandrel constructed from either 321 or 347 stainless steel alloys. The assembly is then placed in a vacuum furnace, where it is heated to a temperature of 1935°F following a computer-controlled ramp schedule. During the heating cycle, the mandrel and workpiece are rotated about a horizontal shaft. The furnace and part are then cooled via the introduction of argon gas. During the brazing process, the stainless steel mandrel expands a greater rate than the Inconel workpiece, and thus the outer surface of the mandrel exerts a pressure on the inner surface of the nozzle. Consequently, the dimensional accuracy of the finished product is largely dependant on the accuracy of the mandrel. Currently, the mandrels used by Aeronca are losing dimension unpredictably; thus, it is often not discovered that a mandrel is defective until the subsequent loss in dimension is noticed in the finished parts. Thus, the primary objective of this project is to examine the nature of common mandrel defects, and determine a cause for this failure. Thus, this project has two main phases: a research phase, which includes acquiring experimental data from Aeronca; and an analyses phase, which incorporates the use of computer modeling to further assess the acquired data. Methodology The braze mandrels used at Aeronca exhibit a variety of different defects. One of the most common is a loss of roundness over the life of the mandrel. This is often due to small areas of dishing on the surface of the conical shell. Other mandrels have experienced severe buckling, sometimes to the point of cracking, within the spokes that run radially outward from the center shaft to the conical shell. The nature of these defects indicated the presence of unwanted compressive stresses on the conical-shaped mandrel during the heating cycle. After conducting research into the brazing process, particularly as implemented at Aeronca, several key areas of concern were identified: the uniformity of the temperature distribution in the mandrel during brazing; the retention of strength in the mandrel compared to that of the Inconel workpiece; and the potential for the occurrence of grain growth in the mandrel during the heating cycle. In order to evaluate these candidate causes further, data was obtained both through experimentation at Aeronca and research into previously conducted experiments.


A large variation in temperature amongst different regions of the mandrel could lead to varying levels of thermal expansion. This could in turn lead to the presence of internal stresses as one region tries to expand more than the surrounding material. To investigate the temperature distribution in the mandrel and workpiece during a brazing cycle, a total of twelve thermocouples were placed at various locations on the workpiece-mandrel assembly, including on the outer surface of the workpiece, the inner surface of the mandrel, and in the small gap in between the mandrel and workpiece. These thermocouples monitored the local temperature and recorded these values every thirty seconds; this data was then compiled in an Excel spreadsheet. This process was repeated for several runs to ensure the accuracy of the acquired data. The results of the experiment can be seen in Figure 1. The horizontal axis represents the control temperature of the furnace, while the three plots represent the variation of the temperature readings of three thermocouples from the desired temperature. Research was also conducted into the material properties of the stainless steel alloys used in mandrel construction, as well as those of the Inconel 625 used in the exhaust nozzle. Of particular interest are the thermal expansion coefficients of each material, as well as the materials’ retention of strength at elevated temperatures. This data can be seen in Figures 2 and 3. As can be seen in Figure 2, both 321 and 347 stainless steels have a significantly higher coefficient of thermal expansion than Inconel. Conversely, Inconel has a higher yield strength than either stainless alloy. Thus, as the stainless steel mandrel expands during heating, it comes into contact with the Inconel workpiece, whose higher strength constrains the mandrel against further expansion. If this interference is severe enough, it could lead to a compressive hoop stress within the mandrel. To investigate the presence of this compressive stress, one of the mandrel’s spokes was modeled as a fixed beam. The secant equation was used to calculate the maximum stress in the spoke as follows:

σmax = (P/A) [1 + (ec/r²) sec( (L/(2r)) √(P/(EA)) )]

Under normal conditions, the length of the spoke, indicated by L in the preceding equation, would increase as the spoke was heated. The temperature data taken from the thermocouple experiment was used to determine the unconstrained length of the spoke at a given point in the brazing cycle. The spoke was then assumed to be constrained by the Inconel workpiece, and the corresponding stress due to the prevention of thermal expansion was calculated and compared to the yield strength of the material. Results Obtained The results of the thermocouple experiment were very revealing. The temperature history of each thermocouple was graphed, along with the control temperature of the furnace. The greatest temperature differences existed between the different layers of the workpiece-mandrel assembly. The outermost surface of the workpiece, represented by the Outer Surface plot in Figure 1, most closely follows the control temperature of the furnace. However, the inner surface of the mandrel, depicted by the boldest plot in Figure 1, lags greatly during both heating and cooling. At a time of approximately 140 minutes, the temperature of the mandrel is more than 800°F below that of the furnace; thus, the concerns of uneven temperature distribution seem verified. The results from the compressive stress model are plotted in Figure 4. As can be seen, the maximum stress experienced in the spoke rises sharply with temperature, while the yield stress diminishes slightly. The maximum stress exceeds the yield stress beginning at a temperature of 112°F, much lower than the maximum temperature of 1935°F reached during the brazing process. Thus, there is significant evidence to suggest that a compressive force is present on the mandrel during brazing.
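A small sketch of the constrained-expansion check described above is given below. The geometry, material properties, and the simple conversion of suppressed thermal strain into an axial load (P = E·A·α·ΔT) are placeholders chosen for illustration; they are not the values or the exact procedure used in the project.

    import math

    # Illustrative spoke properties (not the project's actual values)
    E = 28.0e6        # elastic modulus, psi
    A = 0.25          # cross-sectional area, in^2
    L = 6.0           # spoke length, in
    R_GYR = 0.10      # radius of gyration, in
    ECC_C = 0.5       # eccentricity ratio e*c/r^2 (dimensionless)
    ALPHA = 9.3e-6    # coefficient of thermal expansion, 1/degF

    def secant_max_stress(p):
        """Secant formula: sigma_max = (P/A) [1 + (ec/r^2) sec(L/(2r) * sqrt(P/(EA)))]."""
        arg = (L / (2.0 * R_GYR)) * math.sqrt(p / (E * A))
        return (p / A) * (1.0 + ECC_C / math.cos(arg))

    def constrained_load(delta_t):
        """Axial load if thermal expansion alpha*dT is fully suppressed: P = E*A*alpha*dT."""
        return E * A * ALPHA * delta_t

    if __name__ == "__main__":
        for dt in (40.0, 80.0, 120.0):
            p = constrained_load(dt)
            print(f"dT = {dt:5.1f} F   P = {p:8.1f} lb   sigma_max = {secant_max_stress(p):8.0f} psi")

Sweeping the temperature rise in this way and comparing the result against the temperature-dependent yield strength is the same kind of comparison that produces Figure 4.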


Significance of Results The results of the thermocouple experiment and compressive stress model indicate several areas of concern in the current mandrel system employed at Aeronca. The temperature data suggests that different regions of the workpiece-mandrel assembly are heating more rapidly than others, a trend that could result in internal compressive and tensile stresses as one area expands at a different rate than the neighboring regions. The results of the investigation into the compressive stresses in the spokes of the mandrel are representative of the stresses that are actually present during brazing. The model assumes that the spoke is completely restrained against thermal expansion, a condition that is not present since the Inconel workpiece would expand slightly during heating. However, the results of the model do suggest that the stress experienced in the spoke will surpass the yield stress of the material, which could lead to warping and eventually fracture after repeated cycling. Figures and Charts

[Plot: ACN 120 uniformity survey, radial direction; deviation from control temperature for Mandrel #6; temperature (°F) versus time from cycle start (minutes); curves: Outer Surface, Gap, Inner Surface]

Figure 1. Deviation of thermocouple temperature from furnace control temperature3

[Plot: thermal expansion coefficient (10^-6 in./in./°F) versus temperature (°F) for stainless steel 321, 347, and Inconel 625]

Figure 2. Variation of coefficient of thermal expansion with temperature2


[Plot: yield strength at 0.2% offset (psi) versus temperature (°F) for Stainless 321, Stainless 347, and Inconel 625]

Figure 3. Retention of strength at elevated temperature2

[Plot: maximum stress and yield stress (psi) versus temperature (°F) for buckling due to constraint of thermal expansion]

Figure 4. Maximum stress in mandrel spoke as function of temperature

Acknowledgments The author of this paper would like to thank Ms. Gretta Novean at Aeronca, Inc., for providing experimental data, as well as allowing the use of Aeronca’s facilities for testing. The author would also like to acknowledge Dr. James Moller, Miami University, for advising this project. References

1. Brazing Handbook. Fourth Edition. American Welding Society, 1991.
2. Manson, S. S. Aerospace Structural Metals Handbook.
3. Aeronca, Inc. Magellan Aerospace Corporation. <www.aeroncainc.com>


High Pressure Foil-Journal Bearing Characterization

Student Researcher: Maxwell H. Briggs

Advisor: Dr. James McGuffin-Cawley

Case Western Reserve University Mechanical Engineering Department

Abstract Foil air bearings are self-acting hydrodynamic bearings which rely upon solid lubricants to reduce friction and minimize wear during sliding which occurs at start-up and shut-down when surface speeds are too low to allow the formation of a hydrodynamic air film. It has been shown that these solid lubricants perform better after extended start stop cycles at high temperature. This process is known as “breaking in” a bearing / journal. In order to show the effects of breaking in a bearing and / or journal it is necessary to measure the original operating characteristics of the journal bearing combination. These characteristics include friction coefficient, preload, preload pressure, and lift off speed. In this test, these three characteristics were measured for four journal / bearing combinations. Introduction All of this testing was performed on the drive for the Oil Free High Pressure Rig in SW-14. The drive for this particular rig is currently a Black and Decker Router motor which spins modified journals. This test was done in order to characterize two journals and two bearings in four combinations of each other and to ultimately show the effects of break in on start up and shut down rubbing friction. Procedure Measuring the friction coefficient and preload was done by turning the journal at a constant slow rotational velocity and measuring reaction torque on the bearing. The journal was tightened to the router motor and cleaned before slipping a bearing over it. The bearing was secured to a bearing sleeve which held torque and load arms. The load arm hung below the sleeve and was connected to a pulley which held the cable for the weight system; a pulley was used to minimize torque put on the bearing due to the possible misalignment of the loader. The torque arm reached above the sleeve which was connected to a horizontal load cell. By knowing the distance from the axis of rotation to the horizontal load cell a torque was measured. Below is a diagram showing the basic layout of the test configuration.


By use of the load arm connected to the underside of the bearing sleeve loads were applied in one pound increments while the journal was turned. The journal was turned by hand at approximately the same speed for each load. The reason for turning by hand is because the router motor is unable to operate at a low enough voltage to drive the journal at the speed required. The rotational speed of the journal would not affect the torque measurement as long as the speed stays constant and it does not reach a speed where the bearing would lift off. Turning it by hand seemed to be the best option. At each load four measurements of torque were recorded in order to find the best average over the range of possible torques. Eleven loads were applied for zero through ten pounds. By measuring the reaction force of the bearing at a known distance from the axis of rotation a force on the bearing surface can be determined by multiplying this force by the ratio of torque arm distance to journal radius. By knowing the force at the journal/bearing surface the system can be simplified to two plates in relative motion in order to determine a friction coefficient. Graphs were made for each journal/bearing combination of applied load versus reaction load at the bearing surface. The slope of a linear fit to each graph is the coefficient of friction while the absolute value of the x-intercept is the measured preload. The preload is not a function of friction or of applied load; instead it is simply a function of the geometry of the journal and bearing. Also, by knowing the preload force it is a simple calculation to divide by the bearing foil area in order to find preload pressure. Next, lift off speed was measured for each journal/bearing combination. This experiment was done by using an infrared speed pick up as well as a load cell for recording data. Half of the journal end was sanded and painted flat black in order to operate the speed pick up correctly. Both the load cell and speed pick up were linked to a display and a chart recorder. This test was done by applying three different loads for each journal/bearing combination. The load was adjusted simply by changing the bearing sleeve to one of three made of different metals: aluminum, steel, and tungsten. For each load the speed was increased at roughly a constant rate by using a controllable voltage source which supplied current to the router motor. Speed and torque were recorded onto the chart recorder and lift off speed was determined by measuring the speed at the point where torque maximized followed by a drastic drop indicating the development a hydrodynamic film. The speed was also measured were torque spiked on the deceleration of the journal which proved to be about the same as the previous measured value due to the vertical orientation of the torque arm. These two values were averaged and plotted against the applied loads for each journal/bearing combination in order to find liftoff speed as a function of load. Results and Discussion The results of the testing are summarized below:

                            Friction        Preload        Preload           Liftoff Speed (krpm),
                            Coefficient     Force (lbs)    Pressure (psi)    P = Applied Load (lbs)
Old Journal / Old Bearing   .207 ± .007     1.8 ± .2       .41 ± .05         .42P + .62
Old Journal / New Bearing   .130 ± .002     .40 ± .08      .09 ± .02         .15P + .29
New Journal / Old Bearing   .19 ± .02       2.3 ± .5       .52 ± .11         .26P + .74
New Journal / New Bearing   .186 ± .004     .16 ± .09      .04 ± .02         .34P + .22

The numbers for friction coefficient and preload were obtained by measuring friction force (torque) as a function of applied load. Multiple measurements were taken for each data point, and a weighted average of the data was fit to a straight line. The slope of that line gave the friction coefficient, while the x-intercept gave the preload force. The error values that appear in the chart above reflect the uncertainty of the linear fit due to statistical variance of torque measurements at each applied load.
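The slope/intercept extraction described above can be reproduced with an ordinary least-squares line, as in the Python sketch below. The sample data points are invented to roughly mimic the old journal / old bearing numbers, and the fit is unweighted, unlike the weighted average actually used in the test, so this is only an illustration of the procedure.

    def fit_line(xs, ys):
        """Ordinary least-squares fit y = m*x + b; returns (m, b)."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sxx = sum((x - mean_x) ** 2 for x in xs)
        m = sxy / sxx
        return m, mean_y - m * mean_x

    if __name__ == "__main__":
        applied = [0, 1, 2, 3, 4, 5]                        # applied load, lbs (illustrative)
        friction = [0.37, 0.58, 0.79, 0.99, 1.20, 1.41]     # friction force at journal surface, lbs
        mu, b = fit_line(applied, friction)
        preload = abs(b / mu)        # absolute value of the x-intercept
        print(f"friction coefficient = {mu:.3f}, preload = {preload:.2f} lbs")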


The measured values for the friction coefficients resulted to be about what was expected. Three of the configurations provided similar friction to one another although the preload varied due to which bearing was being used. Theoretically the lowest friction should be the configuration with the used journal and the new bearing. This is because the new bearing came with a solid lubricant coating which included Teflon and the used journal had a smoother surface finish due to being run for the past couple of months. The used journal had been broken in by continued use but was not properly broken in by running start/stop cycles at high temperature. This combination resulted in the lowest friction coefficient by far compared to the other configurations which is what was expected. The highest value for friction theoretically should have been the combination of the new journal and old bearing. The reason for this is because the new journal was fresh from the machine shop without any break in and the old bearing lacked the solid lubricant coating. Our test showed that this combination had close to the highest friction coefficient but not the absolute most which was found to be the combination of the used journal and the used bearing. The difference of friction between these configurations implies that the old journal had developed a less smooth surface since it had been machined or the new journal had been machined to a better surface finish than the used journal. Although since the difference in friction between these two journals is so minimal it may imply that the old journal had not developed a smoother surface because it was not properly broken in at high temperature. The measured preload for each configuration seemed to vary due to the different bearings. The results show that the difference in preload between the two bearings is almost an order of magnitude. When putting the bearings onto the journals during testing the new bearing was significantly easier to fit. It seemed that the old bearing had a much tighter fit to the journals. After finding such a difference in preload between the bearings further inspection of the new bearing was performed. The new bearing was put into place while running the router motor at a low speed. The whole bearing assembly began to oscillate which at first seemed to be because of a slight run out in the journal. But with further inspection it was determined that the new bearing was fit too loosely onto the journal. After examining the top and bump foils of the bearing it was found to be damaged. The bump foil in a few spots seemed to be flattened causing the loose fit over the journal and ultimately a small preload. The testing for friction should not be completely disregarded because the friction coefficient is not at all a function of preload. The data for the preload and liftoff speed of the new bearing should be disregarded. The results for liftoff speeds at different applied loads are shown below only for the old bearing.

Applied Load           Old Journal / Old Bearing    New Journal / Old Bearing
                       Speed (krpm)                 Speed (krpm)
Aluminum (1.38 lbs)    1.26                         1.12
Steel (3.17 lbs)       1.89                         1.54
Tungsten (7.89 lbs)    3.99                         2.80

The liftoff speed for the old journal was found to be higher than that of the new journal. This makes sense if the surface finish on the new journal is rougher than that of the old journal, as suggested by the difference in friction between the two journals when testing with the new bearing. Imperfections in the finish may force air between the journal and the foil, causing the hydrodynamic film to form earlier.

Conclusion
Unfortunately, high-temperature start/stop cycle break-in tests have not yet been performed on the new journal. These tests are expected to start next week and will hopefully shed light on whether properly breaking in journals does in fact have an impact on start-up/shut-down friction and liftoff speed. The data collected thus far show that running a journal without proper break-in does not drastically affect start-up/shut-down friction. They also suggest that the liftoff speed of a journal may correlate with surface finish and wear.

29


CFD Analysis of the S809 Wind Turbine Airfoil

Student Researcher: Matthew A. Castellucci

Advisor: Dr. Jed E. Marquart, P. E.

Ohio Northern University Mechanical Engineering Department

Abstract
An investigation was conducted into the methods and accuracy of predicting the flow field and aerodynamic characteristics of a specific horizontal-axis wind turbine airfoil using a commercially available, general-purpose CFD code. The purpose of this work was to gain experience with, and an understanding of, the methods used to accurately model and compute the aerodynamic characteristics of a wind-turbine-specific airfoil. The calculated lift and drag coefficients of the NREL S809 airfoil were compared to published calculations and wind tunnel data. Gambit was used to generate the grid and FLUENT was used for solving and postprocessing.

Introduction
Modeling and analysis techniques for evaluating the lift and drag coefficients of a common wind-turbine-specific airfoil were investigated. Results were compared to published wind tunnel data and CFD analysis.

Modeling / Boundary Conditions
Coordinates for the NREL S809 horizontal-axis wind turbine airfoil were obtained, and the airfoil shape was modeled in two dimensions using Gambit. The airfoil was modeled at zero angle of attack. The square grid extends +/-10 chord lengths in each direction and consists of approximately 24,500 triangular cells, with roughly 200 cells along the airfoil surface. Wall boundaries comprise the airfoil and the upper and lower (horizontal) boundary surfaces. The inlet is a velocity inlet, while the outlet is a simple outflow. The grid developed is shown in Figure 1.

Figure 1. Computational grid used.

After creating the mesh and boundary types in Gambit, the model was read into FLUENT. The grid was scaled to achieve a 600 mm chord length. Flow was simulated at a Reynolds number of 2×10^6, which corresponds to an inlet velocity of 50 m/s. Calculations for fully turbulent flow were performed using the k-ε turbulence model.
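As a quick consistency check (not part of the original study), the stated chord and inlet velocity do give a Reynolds number near 2×10^6 if standard sea-level air properties are assumed:

```python
# Consistency check: Re = rho * V * c / mu for the stated 600 mm chord and
# 50 m/s inlet velocity.  Air properties are assumed sea-level standard
# values, not quantities taken from the report.
rho = 1.225        # kg/m^3, assumed air density
mu = 1.7894e-5     # kg/(m*s), assumed dynamic viscosity
chord = 0.600      # m (600 mm chord)
velocity = 50.0    # m/s (inlet velocity)

reynolds = rho * velocity * chord / mu
print(f"Re = {reynolds:.3e}")   # ~2.05e6, consistent with the stated 2x10^6
```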

30


Solution / Results
No convergence problems were encountered. In order to calculate the lift and drag coefficients, the forces due to pressure were output in the corresponding directions. Figures 2 and 3 show the computed pressure contours and velocity vectors.

Figure 2. Computed pressure distribution.

Figure 3. Computed velocity vectors.

Table 1 lists the results of this study, compared to the results listed in Reference 1. Percent errors are calculated between the values calculated in this study and both the calculated and experimental values from Reference 1; values from Reference 1 are taken as ‘actual’ in the percent error calculations.

31


Table 1. Results

Coefficient of Lift
                        Value      % Error
Calculated              0.1547     -
Ref. 1, calculated      0.1324     16.8429
Ref. 1, experimental    0.1469     5.309735

Coefficient of Drag
                        Value      % Error
Calculated              0.0131     -
Ref. 1, calculated      0.0108     21.2963
Ref. 1, experimental    0.007      87.14286
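The percent-error column can be reproduced directly from the tabulated values; a minimal sketch, treating the Reference 1 values as ‘actual’ as stated above:

```python
# Reproducing the percent-error column of Table 1 from the tabulated values.
def percent_error(calculated, actual):
    return abs(calculated - actual) / actual * 100.0

cl_this_study = 0.1547
cd_this_study = 0.0131

print(percent_error(cl_this_study, 0.1324))  # vs Ref. 1 calculated   -> ~16.8%
print(percent_error(cl_this_study, 0.1469))  # vs Ref. 1 experimental -> ~5.3%
print(percent_error(cd_this_study, 0.0108))  # vs Ref. 1 calculated   -> ~21.3%
print(percent_error(cd_this_study, 0.0070))  # vs Ref. 1 experimental -> ~87.1%
```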

References
1. ‘CFD Calculations of S809 Aerodynamic Characteristics’. Stuart S. Ochs and Walter P. Wolfe. 3 October 1996.
2. ‘Design and Experimental Results for the S809 Airfoil’. Dan M. Somers. January 1997.

32


Microprocessors for Robotics Application

Student Researcher: Sheriff Y. Ceesay

Advisor: Dr. Deok Nam

Wilberforce University Engineering and Computer Science Division

Abstract
Objective: Construct a microprocessor using the B2Spice program and program a microcontroller to perform a specific robotics control task. The end goal is to build a better understanding of how microcontrollers function.

Project Description
The project begins with a literature search to learn how a basic microcontroller works. The microcontroller has several inputs and outputs for receiving, decoding, and encoding instructions sent to it through a CPU. The CPU produces a low-level language that depends on the type of microprocessor being used. Initially, a simple microprocessor is constructed in B2Spice to get an idea of what happens when instructions are created for the microcontroller. After constructing a microcontroller in B2Spice, simple code is written and tested in the B2Spice software before a physical microcontroller is programmed. Later, different programs will be used to simplify microcontroller programming; the simpler the program for the microcontroller, the better its construction and use are understood. In the end, the aim is to find many uses for microcontrollers beyond computers and the most common household appliances, and to use microcontrollers to create a form of artificial intelligence that can help make dangerous jobs safer in many countries.

Literature and Theoretical Review
Microprocessors have been in use for several decades, and development since then has focused mainly on increasing processing speed. The first series of processors was introduced around the mid-1970s, mainly by Intel. Early on, and still today, microprocessors are essentially small arrays of transistors arranged in a complex manner to create a form of memory. There are several things a device must have to be considered a processor. First, it must have an arithmetic logic unit (ALU), the part of the processor that performs mathematical operations. A microprocessor must also be capable of moving data from one location to another, and of making decisions to jump to a new set of instructions based on those decisions.

Microprocessors are applied in many ways; one is in robotic applications or control systems. With control systems, more components are involved. The microprocessor is still the main part of a control system, but it becomes a microcontroller. Microcontrollers are used to control things in the physical world, as opposed to using a computer to change things in a digital world. Microcontrollers can help give better control of devices used every day and make life simpler: for example, detecting obstacles in a car's path when the driver is moving the vehicle in reverse, or controlling the lighting of a room to help save electrical power.

33


Analysis and Report - Progress
So far in my research, I have found many alternative ways of using microcontrollers. My initial idea was to use a microcontroller to control the watering of a plant, to help people on vacation better manage their indoor plants while away. The microcontroller I plan to use is a BS2-type microcontroller from Parallax, which allows the user to write slightly more complex instructions. The BS2 microcontroller can be extended with sensors to help it interact with its surroundings. The current stage of the project is focused on understanding the programming format of the microcontroller. Further aspects of the project include exploring more ways the BS2 Parallax microcontroller can be used to control other devices people are likely to use every day. For robotic control, more sensors are needed to give the microcontroller greater awareness, since a robot must interact with its surroundings to know which obstacles to avoid, based on its design. The BS2 allows up to 16 inputs on the board; the number of inputs available for sensors is limited, since outputs are also needed for controlling other devices, such as the servos and motors that move a robot.

Conclusion
With a microcontroller there are many uses that can be implemented in the physical world: microcontrollers can be used to create forms of artificial intelligence, save electrical power, and build safety systems. With control systems, the only limitation is our imagination; control systems can be applied toward almost anything, in this case improving artificial intelligence. Using the BS2 microcontroller we cannot achieve very much, due to its limited capacity of executing 50 lines of code, but bigger and better microcontrollers can be created that execute thousands of lines of code to control many devices.

References
Howstuffworks.com

http://computer.howstuffworks.com/microprocessor2.htm http://computer.howstuffworks.com/microprocessor3.htm

Parallax

http://parallax.com

Basic Stamp

http://www.tjhsst.edu/~jleaf/tec/stamp/sttutor1.htm

34


Space – Exploring the Planets

Student Researcher: Lauren A. Clark

Advisor: Dr. Paul C. Lam

The University of Akron College of Education

Abstract
Activity 1: Each day the class will begin by viewing the NASA image of the day, found in the Multimedia section of the NASA website.

Activity 2: The students will be assigned a project that will continue for approximately two weeks. The students will work individually but will be allowed to consult with each other. They will be given work time in class, including time in the school's computer lab and media center. The students will be asked to design a travel brochure, poster, or advertisement for one of the planets or stars in our solar system. They must thoroughly research the planet or star and include as much information and as many images as they can find, such as its distance from Earth, whether it has moons, how it formed, what makes it unique, and whether it is part of a constellation. They must also research the effects that traveling to this planet or star might have on the human body (physiology) and cite this as a disclaimer. The students will be introduced to NASA's website and will be shown how to navigate through the resources offered.

Other activities that will go along with this unit (resources for these lessons taken from Exploring Meteorite Mysteries): In "Building Blocks of Planets - CRUNCH! Accretion of Chondrules and Chondrites," the students will observe the process of planet formation, record what they saw, and answer questions. In "Historical Meteorite Falls," students will read reactions from people who experienced the selected meteorite falls and answer questions. The students will be required to do additional research on their group's meteorite, including locating it on the map.

Subject: Physical Science
Grade Level: Middle School, grades 7-9
Time Allocation: Approximately 2 weeks to complete the unit

Lesson Objectives
The students will be exposed to many aspects of space and space travel via the NASA picture of the day. The students will use their inquiry skills to research their assigned planet or star. The students will understand the physiological implications of space travel. The students will recreate the process of forming planets. The students will gain valuable knowledge about the aspects of space that could affect them as humans.

35


Methodology Used
The students will learn about space in many different ways. The visual learners in the class will benefit greatly from the daily discussion of the picture of the day. Seeing the amazing photographs and paintings will be a very effective way to introduce current topics in space exploration. This activity will open the class each day and will hopefully get the students thinking about space and will generate questions and discussion among the students. The students will be given the ability to work independently to research their planets or stars; they will have to decide on their own what information they would like to discover and share with the class. The students will work together to complete the additional activities in class. They will complete the hands-on activity and will answer critical thinking questions in CRUNCH! Accretion of Chondrules and Chondrites. Students will also gain a historical perspective on meteorites in Exploring Meteorite Mysteries – Historical Meteorite Falls. The students will also gain a human perspective in this lesson. The students will be responsible for new vocabulary words throughout all of the lessons.

Ohio State Standards covered in this unit:

Doing Scientific Inquiry
Grade 7:
#3. Formulate and identify questions to guide scientific investigations that connect to science concepts and can be answered through scientific investigations.
#7. Use graphs, tables and charts to study physical phenomena and infer mathematical relationships between variables (e.g., speed and density).
Grade 8:
#3. Read, construct and interpret data in various forms produced by self and others in both written and oral form (e.g., tables, charts, maps, graphs, diagrams and symbols).
Grade 9:
#5. Develop oral and written presentations using clear language, accurate data, appropriate graphs, tables, maps and available technology.
#6. Draw logical conclusions based on scientific knowledge and evidence from investigations.

Scientific Ways of Knowing
Grade 9:
#2. Illustrate that the methods and procedures used to obtain evidence must be clearly reported to enhance opportunities for further investigations.

Earth and Space Sciences
Grade 8: The Universe
#1. Describe how objects in the solar system are in regular and predictable motions that explain such phenomena as days, years, seasons, eclipses, tides and moon cycles.
#2. Explain that gravitational force is the dominant force determining motions in the solar system and in particular keeps the planets in orbit around the sun.
#3. Compare the orbits and composition of comets and asteroids with that of Earth.
#4. Describe the effect that asteroids or meteoroids have when moving through space and sometimes entering planetary atmospheres (e.g., meteor-"shooting star" and meteorite).
#5. Explain that the universe consists of billions of galaxies that are classified by shape.
#6. Explain interstellar distances are measured in light years (e.g., the nearest star beyond the sun is 4.3 light years away).
#7. Examine the life cycle of a star and predict the next likely stage of a star.
#8. Name and describe tools used to study the universe (e.g., telescopes, probes, satellites and spacecraft).

36


Grade 9: The Universe
#1. Describe that stars produce energy from nuclear reactions and that processes in stars have led to the formation of all elements beyond hydrogen and helium.
#2. Describe the current scientific evidence that supports the theory of the explosive expansion of the universe, the Big Bang, over 10 billion years ago.
#3. Explain that gravitational forces govern the characteristics and movement patterns of the planets, comets and asteroids in the solar system.

Procedure
1. The students will be introduced to NASA's website and all of the tools available through the site.
2. The students will view the NASA picture of the day at the beginning of every class period. The students will discuss the picture and be offered extra credit if they would like to research the topic any further.
3. The students will be introduced to space and readings will be assigned from their textbook.
4. The students will be assigned the Brochure or Poster project during their second class on space.
5. The students will be given time during class over the next two weeks to work on their projects.
6. Meanwhile, the students will complete the two supplementary activities from NASA along with several other activities.
7. The students will present their brochures or posters on planets/stars to their classmates.
8. Their projects will be graded.

Criteria for Grading the Project
a. Information is presented in a clear and understandable fashion.
b. Credible sources were used.
c. The brochure is interesting and presented clearly.
d. The student is prepared and organized.

References
1. NASA Homepage: http://www.nasa.gov/home/index.html
2. For the picture of the day go to: http://www.nasa.gov/multimedia/highlights/index.html
3. Exploring Meteorite Mysteries: A Teacher's Guide with Activities for Earth and Space Sciences, August 1997, Lesson 10 and Lesson 15.

37


Moving Towards Lean in the Emergency Department of the University Hospital East

Student Researcher: Denia R. Coatney

The Ohio State University Department of Industrial and Systems Engineering

Abstract
The concept of Lean manufacturing has taken the healthcare industry by storm. Lean principles have helped healthcare services improve quality, reduce costs, and serve patients faster. Many hospitals are utilizing these principles and reaping the benefits: decreased downtime for patients, reduced waiting time for lab results, reductions in processing times, and improvements in many other areas. When implemented in healthcare, Lean principles have been shown to minimize waste and maximize value. Specifically, at OSU Hospital East, where Lean was implemented for this project, the hospital saw a reduction in the time it takes for patients to reach their assigned rooms, a reduction in the time it takes for patients to see physicians, and a reduction in patients' total length of stay. Additionally, the hospital saw an increase in the number of admitted patients and an increase in the number of discharged patients. It is important to note that all of this waste reduction was accomplished without adding or removing resources or increasing the cost of operation.

Introduction and Objectives
Lean can work efficiently in any area, be it healthcare, manufacturing, or elsewhere, as long as there are defined activities operating to produce a specified product or service. Implementing Lean in healthcare is straightforward once all the waste associated with the operation is clearly defined. Waste can be identified by drawing value stream maps of activities and procedures; value stream maps give a visual picture of flow and help pinpoint where waste is occurring. Once the waste was recognized by the hospital, the staff was convinced that there was an opportunity for improvement. Examples of waste detected in the OSU emergency department included nurses' time spent servicing and processing patients, physicians' time spent observing and servicing patients, and time patients spent idle waiting for release, for test results, or to be seen by a nurse or doctor. Waste was also observed in the flow of patients and services, and in work imbalance and variation.

The overall goal of Lean in this project was to increase the satisfaction of the customer, in this case the patient. To improve service to the patients of the OSU emergency department, waste was identified and analyzed. After analyzing the waste, a plan and objectives were determined for eliminating or reducing it. The objectives were: minimization of patients' length of stay (LOS), maximization of patient throughput, minimization of the total travel distance of patients and emergency department staff (nurses and physicians), and minimization of waiting times at different stages of the patient's flow path. Improvement in these areas increased overall patient satisfaction.
To achieve the objectives mentioned above, several steps were required: the patients treated at the OSU emergency department had to be logically categorized; a new and improved cellular layout of the facility had to be executed; Lean strategies for enhancing the emergency department had to be put in place within the current system; the suggested solutions had to be administered within the current system; data had to be collected before and after implementation of the Lean solutions; and the improvements had to be measured in quantitative terms to show the emergency department how these changes aided in developing a new and improved system.

Methodology
Executing Lean at the OSU emergency department allowed the hospital to focus more on patient value-added procedures. As a result, the department is now able to do much more with less: more patients can be assisted, the same space can be used more efficiently, and the cost of servicing patients was reduced. To explain how the objectives were achieved, it is important to show how each solution was accomplished individually. The first solution was the logical categorization of patients. In order to logically categorize patients, an emergency department scorecard was designed to capture the percentage

38


of patients admitted, left without being seen (LWBS), discharged, expired, or transferred. The observation made from this scorecard was that patients leaving the emergency department are either admitted to a hospital floor, sent home, or else LWBS, expired, or transferred.

The cellular layout of the emergency department had to be reorganized in order to better serve the patient. Rooms were assigned a color according to patient acuity. The acuity ranges are as follows: low acuity and low cycle time (LALC), low acuity and high cycle time (LAHC), high acuity and low cycle time (HALC), and high acuity and high cycle time (HAHC). HAHC patients (acuity level 5) are assigned to rooms 1-9, color-coded red, because these rooms are in closer proximity to the emergency medical service entrance and the central nurse station, and they are close to the soil units and labs (rooms 1 and 2 are specifically designed for trauma patients). Rooms 10-16 were labeled green and assigned to HALC patients (acuity level 4); these rooms were chosen for this group because its length of stay is normally short, so the rooms are centrally located near the nurse station and close to the lab testing room. Fast Track patients (low acuity, so they can be seen quickly and released with shorter cycle times) were assigned the color blue and rooms 17-20; these rooms are right in front of the emergency department entrance, so it is easy to assign patients and there is less travel time for them, which suits this group's high turnover of service. Finally, LAHC patients (acuity level 3) were assigned to rooms 21-25, color-coded yellow; these rooms are close to the central station, so nurses can give these patients attention without having to travel long distances.

Each room was assigned a kanban card with the room number written on it. The cards for vacant rooms are kept in the triage/registration area where the patient checks in. When a patient is assigned a room, they are given a colored kanban card associated with their acuity and the cellular layout (color-coded room). This kanban card follows the patient throughout their process and is given to registration upon discharge. The card serves as an indicator for triage that rooms are empty or occupied and that patients are being served or discharged; it also cues triage to call the charge nurse to verify room vacancies and to warn patients of a wait due to occupancy. This process helps keep patients and staff informed of what is occurring within the emergency department.

A visual management system (VMS) was also arranged to help nurses distinguish between patient acuities and the appropriate room assignments based on the new cellular layout (Figure 1). The VMS for the future state indicates the category of patient being treated in each assigned room. Magnets colored red, blue, yellow, and green are assigned to the visual management board according to the cellular layout. The VMS helps improve the nurse-to-patient ratio dynamically, based on the category and number of patients in the emergency department. The concept of the cellular room assignment is to position similarly treated patients in the same area. Depending on the number of patients in selected categories, the charge nurse is authorized to change room assignments and create virtual cells by reallocating the colors of assigned rooms.
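As a purely illustrative sketch (not the hospital's actual system), the Python snippet below encodes the room/color mapping described above; the data structure and helper function are hypothetical, with room ranges, colors, and acuity levels taken from the text.

```python
# Hypothetical encoding of the cellular layout described above.
ROOM_CELLS = {
    "HAHC":       {"rooms": range(1, 10),  "color": "red",    "acuity": 5},
    "HALC":       {"rooms": range(10, 17), "color": "green",  "acuity": 4},
    "Fast Track": {"rooms": range(17, 21), "color": "blue",   "acuity": None},
    "LAHC":       {"rooms": range(21, 26), "color": "yellow", "acuity": 3},
}

def assign_room(category, occupied):
    """Return the first vacant room (and its kanban color) for a category."""
    cell = ROOM_CELLS[category]
    for room in cell["rooms"]:
        if room not in occupied:
            return room, cell["color"]
    return None, cell["color"]   # no vacancy: triage warns the patient of a wait

room, color = assign_room("HALC", occupied={10, 11})
print(room, color)   # -> 12 green
```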

Figure 1. Cellular Layout of OSU Emergency Department.

39


There were several Lean strategies that could have been applied to the OSU emergency department to improve the current system. The strategies considered for the emergency department were: triage bypass, phone triage lines, bedside registration, a charting scribe, preformatted charting, and electronic health records. The two strategies that have been implemented in the current system to date are triage bypass and bedside registration. Data were collected on three different dates to see whether the improvements promoted better service to patients. The information collected was: patient ID, arrival time, triage start time, time to room, time to primary nurse, number of nurse visits, disposition decision time, actual disposition time, acuity level, and length of stay.

Results and Discussion
The following table shows the data analysis results from the three dates observed for the Lean strategy analysis. These dates were selected only for sampling purposes, to verify the effect of Lean on the emergency department. For proprietary reasons, not all information can be disclosed in this report. Improvements are still being made at OSU Hospital East, but that information is disclosed to the facility.

Table 1. Lean Implementation Improvements Analyzed.

Metric               Before Lean       After Lean        Change (Difference)
Time to Room         1 hour 24 min     16 minutes        80.95%
Time to Physician    2 hours 28 min    32 minutes        78.38%
Length of Stay       4 hours 19 min    2 hours 16 min    47.50%

The time to room was calculated as the difference between the arrival time and the time at which the patient is placed in an assigned bed. The time to physician was determined as the difference between the arrival time and the time at which the emergency department physician sees the patient. The length of stay was determined as the patient's total time in the emergency department from arrival (check-in) to disposition (release). As seen above, with the Lean strategies adopted, the time to room, time to physician, and length of stay decreased significantly: by 80.95%, 78.38%, and 47.50%, respectively. As a result, patients are happier with the service and are treated earlier and more proficiently.

A hypothesis test was set up to see whether there was a significant difference between the disposition decision time and the actual disposition time. A paired t-test was conducted on the three days analyzed previously; the test showed that there is a significant difference between the two times.

The figure below shows the travel distances saved by implementing the new cellular layout and visual management system. All units are in feet; the test labs and soil units were not considered in this travel distance, because those distances depend on the number of times patients travel to those areas. The travel distance considered here is the total distance a patient travels from registration (check-in) until release from the hospital. The distances traveled by the nurse and the physician were also considered; similar to the patient, the distances traveled by the nurse and physician were reduced, by 14,668 and 10,808 feet respectively. Based on observation, it is assumed that the number of trips to high-cycle-time patients is 8 for nurses and 4 for physicians, and the number of trips to low-cycle-time patients is 4 for nurses and 2 for physicians.
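The before/after improvements in Table 1 can be reproduced with simple arithmetic; the short Python sketch below does so (the helper names are ours, not from the study).

```python
# Recomputing the Table 1 improvements from the before/after times.
def minutes(hours, mins):
    return 60 * hours + mins

def improvement(before_min, after_min):
    return (before_min - after_min) / before_min * 100.0

metrics = {
    "Time to Room":      (minutes(1, 24), minutes(0, 16)),
    "Time to Physician": (minutes(2, 28), minutes(0, 32)),
    "Length of Stay":    (minutes(4, 19), minutes(2, 16)),
}

for name, (before, after) in metrics.items():
    print(f"{name}: {improvement(before, after):.2f}% reduction")
# -> 80.95%, 78.38%, 47.49% (the report rounds the last value to 47.50%)
```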

40


Figure 2. Comparison of patients' traveling distances (0-180 feet scale) for the HAHC, HALC, LAHC, and LALC categories; longer bars represent patients' current traveling distance and shorter bars the future traveling distance under the cellular layout.

Total Traveling Distance Saved due to Cellular arrangement of ED = 323 feet

There were significant savings in length of stay following the implementation of Lean and the other suggested Lean strategies. Additional Lean strategies can be adopted in the future to further develop the OSU emergency department. Recommendations for future study include: developing a simulation model in Arena so that the utilization percentages of nurses, physicians, and labs can be obtained; using PFAST software to route patient travel by volume and to generate an optimum layout for the emergency department; and analyzing nurse availability as a 0-1 matrix, which would help generate nurse schedules for different shifts within the emergency department.

Acknowledgments
I would like to give a special thanks to the ISE 533 Team for all your help and hard work in researching the OSU emergency department. Thanks also to OSGC for this wonderful opportunity to continue my education, and to Dr. Gerald T. Noel, Sr., Central State University, Dr. Kenneth J. De Witt, and Ms. Laura Stacko for making this opportunity possible. Last, but certainly not least, special thanks to The Ohio State University Industrial & Systems Engineering Department for all your help, encouragement, and support throughout this effort.

References
Clinical Initiative Center. The Clockwork ED - Expediting Time to Physician, Volume I, Washington D.C., 1999.
Going Lean in Health Care, Innovation Series 2004.
Graff, L., et al. "Emergency Physician Workload: A Time Study." Annals of Emergency Medicine, July 1993: 1156-1163.
Hollingsworth, J., et al. "How do Physicians and Nurses Spend Their Time in the ED?" Annals of Emergency Medicine, January 1998: 87-91.
Optimizing Patient Flow: Moving Patients Smoothly through Acute Care Settings, Innovation Series, 2004.
http://medicalcenter.osu.edu/
http://www.elmr-electrocnic-medical-records-emr.com/
http://www.Lean.org

http://www.ihi.org

Transforming Care at the Bedside (TCAB), Innovation Series 2004.
Shafer, S. M., Meredith, J. R. Operations Management, John Wiley & Sons.


41


Ceramic-Polymer Composite Bone Substitute Testing

Student Research Team: Ryan J. Cooper, Jared Simon, Abdulla Al-Mahmoud

Graduate Student Assistant: Avinash Baji

Advisor: Dr. Josh Wong

The University of Akron Department of Mechanical Engineering

Abstract
In the United States, surgeons perform more than 300,000 knee and hip replacements each year, mostly for people over the age of 65. The annual number of hip fractures is expected to be nearly 500,000 by the year 2040, because the "baby boomer" generation will have reached age 65 by then. Thus, the demand for bone substitute research is on the rise. In the past, researchers attempted to make such substitutes from sea coral, but these bones were often found to be too brittle. One simple bone substitute composite currently in use is created from a ceramic called hydroxyapatite together with simple metal pins. Hydroxyapatite has the same chemical structure as real bone and thus responds similarly in laboratory tests; however, it often needs metal pins to hold it in place. According to Dr. Frank Schowengerdt, director of the Center for Commercial Applications of Combustion in Space, the problem with current implants that contain some metal is that they do not necessarily promote natural bone growth. Thus, as natural bone begins to grow in again, a second surgery is often needed to remove any metal pins that were used to repair the bone, meaning that patients often require more than one surgery. Bone substitutes can be generalized into two categories: ceramic substitutes and polymer substitutes. Ceramic bone substitutes are often made of calcium phosphate, calcium sulfate, bioglass, hydroxyapatite, or a combination of these. Polymer-based substitutes include both biodegradable and non-biodegradable polymers, used either alone or with other materials; examples of polymer-based bone substitutes are Cortoss, OPLA, and Immix.

Project Objectives
The focus of our design project is to test the interfacial bond strength between a biodegradable polymer, polycaprolactone (PCL), and a biodegradable ceramic powder, hydroxyapatite (HAP). Such a composite is currently being considered in the biomedical field as a potential bone substitute. This type of composite material simulates actual bone structure by gaining flexibility from its polymer component and rigidity from its ceramic component, both of which degrade over time when placed in the human body. The intent is for the composite to degrade (and be carried out of the body through its natural systems) at the same rate that real bone is generated, so that over time the bone substitute is steadily replaced by real bone. This would eliminate the need for a second surgery to remove the traditional metal pins commonly used when repairing bone structures. As mentioned, the scope of this project is to design and implement a method to determine the interfacial bond strength of this polymer-ceramic composite.

Methodology Used
Given our goal of testing the interfacial bond strength between the polymer and the ceramic powder, we had to overcome a number of challenges. First, we had to determine exactly how to create the composite material. After that, the largest challenges were all related to measuring only the interfacial bond strength of the composite, without measuring the force associated with plastic deformation of the polymer pieces or any frictional losses in the system. We had at our disposal an Instron testing machine, which measures applied forces in tension and compression only.

42


Given the availability of 1.5 mm and 3 mm molds, we decided that we could make several 1.5 mm squares of PCL. Our intent was to use these as the "bread" of our composite sandwich, with the HAP powder being the "meat" in the middle. In other words, we would lay one 1.5 mm PCL square in the 3 mm mold, then place a sheet of Teflon approximately 2 inches down from the top side of the square. The remaining portion of the square we would cover with as thin a layer of starch powder as possible. The next step would be to place another 1.5 mm HDPE square on top of the powder in order to create a PCL-HAP-PCL sandwich approximately the thickness of the 3 mm mold. This sandwich would then be compression molded into one composite substance. The Teflon strip at the top of the square would give us a separation in the polymer, leaving two polymer "arms" by which we could pull the composite apart at the interface.

Our design team decided that the best way to minimize plastic deformation of the polymer would be to mount two relatively large-diameter pulleys on aluminum braces and then run an inextensible wire from the top of each pulley to form a loop that would fit in the pinning mechanism on the Instron. We also decided that, if we needed to test different thicknesses of test strips, it would be advantageous to make one of the pulleys laterally adjustable so that it could be slid closer or farther to accommodate the different strip sizes. Thus, we mounted one pulley at one end of the aluminum cross braces and cut a slot completely through the other side of the cross braces so that the pulley's lateral position could be adjusted simply by loosening two bolts, sliding the pulley, and retightening the bolts in the new position. Our test samples could now be rolled up and around the pulleys as they were pulled apart, rather than separating at 90-degree angles as they would without pulleys. Forcing the polymers to conform to this more gradual bend would ultimately minimize the plastic deformation associated with the test. We also minimized frictional losses in the system by mounting the pulley wheels to their axles using ring bearings.

To set up the test, we would first start one of the polymer "arms" around each pulley. Next, a counterweight would be hung from the bottom of each sample to force it to conform to the curvature of each roller, rather than separating at a 90-degree angle. Finally, the two separated "arms" of the sample would be anchored while the test apparatus was pulled up by the Instron, so that the samples actually peel apart at the point where they are tangent to the pulleys. This design effectively minimizes frictional and deformation losses while conducting the peel test.

Due to the high cost of the actual polymer and ceramic powder needed for the real test, we decided to run trial tests with other materials to ensure that our testing apparatus and procedure would work as planned. We used readily available high-density polyethylene (HDPE) to simulate our polymer and a starch powder (cooking flour) to simulate our ceramic powder. We also created some test models out of HDPE with superglue at the interface, as well as models made of duct tape with the sticky side of the tape placed on itself to create an adhesive interface.

Results Obtained
We found from our test models that the starch powder did not bond at all with HDPE. We did obtain significant results from our HDPE-superglue composites and our duct tape test models, however. These test models yielded typical stress-strain curves that gave us confidence that our testing apparatus and procedure were sufficient to test the interfacial bond strength of a composite material. With these results in hand, we proceeded to create our PCL-HAP-PCL composites. Through a number of tests with slight variations in the method of creating the composite, we found that HAP does not bond well with PCL.

43


Significance and Interpretation of Results
Although our results showed that there is little to no bonding between PCL and HAP, it is still significant that we designed a testing apparatus and procedure that minimizes the energy lost during testing due to non-conservative forces as well as plastic deformation. This procedure can be applied in the future to conduct further testing on composite bone substitutes.

Figures and Charts
(See poster board.)

References
1. Better Bone Implants, 30 Oct 2002, available at http://science.nasa.gov/headlines/y2002/30oct_hipscience.htm.
2. Bone Graft Substitute Materials, 15 March 2005, available at http://www.emedicine.com/orthoped/topic611.htm.

44


High Altitude Balloon Flight Path Prediction

Student Researcher: Michael W. Corbett

Advisor: Dr. Mitch Wolff

Wright State University Department of Mechanical and Materials Engineering

Abstract
High altitude ballooning provides unique research opportunities that are otherwise difficult to achieve. The "near space" altitudes allow research to be conducted in low-pressure, low-temperature, high-solar-radiation environments. Unfortunately, the process of ballooning is not perfect and recovery of the payload is not guaranteed. For this reason, flight path prediction is of the utmost importance. A freely available program called BalloonTrack [1] can be used to run predictions, but it lacks some key features. This project involves the development of a new model that uses interpolation to improve the prediction capabilities for cases of incomplete data or multiple reporting locations. In addition, batch prediction is possible using different launch locations, ascent rates, and wind data files. Google Earth [2] is used to plot launch and landing locations on a single map. Relevant flight information for each prediction is included in a text output file and in Google Earth. The prediction accuracy has been found to be at least as good as BalloonTrack for the limited number of simulated flights for which the actual flight information was known for comparison.

Project Objectives
The main objective of this project is to improve the prediction capabilities for high altitude balloon flights by developing a flight path prediction program that uses a different solution algorithm and has features that BalloonTrack lacks. These features include automatic wind data retrieval, launch location selection from a list or by manual entry, interpolation of wind data, batch processing, and multi-point mapping. Standard, text-based output is also included. The code is written and documented so that additional features can be implemented at a later time and additional reporting locations and launch sites can be added easily.

Methodology
The goals of the project were accomplished by starting from a simple script written by David Snyder [3] for use by his NASA Glenn Explorer Post in predicting the flight path of high altitude balloons. This C code was updated to C++ and was the basis for the entire program. The first version of the new program simply eliminated time-processing errors and data-read errors. This version required that the wind data be downloaded manually and that the filename be entered into the program. The list of launch locations was very limited and the inputs were limited to the burst altitude and the average ascent rate.

One of the major drawbacks of using BalloonTrack was that the wind data was not automatically retrieved. The best database of National Weather Service upper air sounding data was that of the University of Wyoming, Department of Atmospheric Sciences [4]. While the website was not difficult to use, downloading the data, loading it, and then running the prediction could be a time-consuming process. Version 2 of the new program added automatic retrieval of the wind data. This was accomplished by using the UNIX command wget, compiled as a Windows executable [5]. This utility was called from within the main program, and control was returned to the program after execution. The wget command is a non-interactive, command-line tool for file retrieval using Internet protocols such as HTTP (HyperText Transfer Protocol). The prediction program prompted the user for the date, time, and reporting location for the wind data and then used wget to retrieve the data. The data was then formatted, resaved, and used for the prediction. Version 2 did not improve the prediction capabilities.
It only increased the speed for running a single prediction by automating wind data retrieval.

45


Version 2.5 of the program added one of the more aesthetically pleasing features: both the landing location and the launch location were plotted on a map in Google Earth. This was done by generating a .kml file with the appropriate syntax from the main program and opening it automatically. Google Earth uses a subset of the XML (Extensible Markup Language) format to script points and overlays for its maps and satellite images. In this version of the prediction program there was little flight information included within Google Earth, and the locations merely had "pins" marking their places. Version 2.5 also improved the accuracy of predictions by tweaking some parameters in the algorithm. The main accuracy improvement was in dealing with holes in the data and how to interpolate the wind speed and direction for those cases. Wind data files that were incomplete (no data above a certain altitude, as opposed to just a few missing points in the middle) still produced very inaccurate results.

Version 3 fully integrated Google Earth into the prediction program. Relevant information such as bearing and distance from the launch location, flight time, and the wind data and reporting location used in each prediction were included. Version 3 also included a much more complete list of launch locations and wind data reporting locations, and the user was given the option of entering a launch location manually with latitude, longitude, and altitude values. The key addition in this version was the ability to run multiple predictions. After a prediction was run, the program prompted the user for another prediction, which could be from the same or a different launch location, using the same or a different wind data reporting location, on the same or a different day (and time). All predictions run in one execution of the main program were put in one text output file and one .kml Google Earth script file. It should also be noted that the wind data files were automatically saved with appropriate file names after being downloaded and processed; this allowed BalloonTrack to use the same data files without needing to retrieve them manually.

The final version of the prediction program, version 4, added another key feature to balloon flight path prediction by allowing the user to select an interpolated wind data reporting location. This was done by combining three wind data files from standard reporting locations. The wind data was averaged into a new data file with weights based on straight-line proximity to the launch location. One degree of latitude is equal to a constant 69.172 miles (to within a very small error) over the entire range (90ºS to 90ºN), since lines of latitude are parallel and equally spaced. Lines of longitude, however, converge at the earth's poles and are not parallel. The distance spanned by one degree of longitude is a function of latitude (distance from the equator) and is given by the following equation: current_lon_dist = lon_dist_at_equator * cos(current latitude) [6]. For central Ohio, the latitude is approximately 40º, and therefore the distance of one degree of longitude is current_lon_dist = (69.172 mi) * cos(40º) = 52.999 miles. Using this conversion and the Pythagorean theorem, the straight-line distances between the launch location and each reporting location were calculated. These distances (relative to each other) were used to weight the data from each reporting location based on proximity to the launch location.
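As a hedged illustration of the proximity weighting just described (an assumed implementation, not the program's actual C++ code), the Python sketch below converts latitude/longitude offsets to miles, applies the Pythagorean theorem, and forms inverse-distance weights; the coordinates are hypothetical placeholders.

```python
# Sketch of distance-based weighting of wind data reporting stations.
import math

MILES_PER_DEG_LAT = 69.172

def straight_line_miles(launch, station):
    lat0, lon0 = launch
    lat1, lon1 = station
    miles_per_deg_lon = MILES_PER_DEG_LAT * math.cos(math.radians(lat0))
    dx = (lon1 - lon0) * miles_per_deg_lon
    dy = (lat1 - lat0) * MILES_PER_DEG_LAT
    return math.hypot(dx, dy)          # Pythagorean theorem

def proximity_weights(launch, stations):
    distances = [straight_line_miles(launch, s) for s in stations]
    inverse = [1.0 / max(d, 1e-6) for d in distances]   # nearer station, larger weight
    total = sum(inverse)
    return [w / total for w in inverse]                 # weights sum to 1

launch_site = (40.0, -83.0)                             # hypothetical central-Ohio site
stations = [(39.42, -83.82), (40.50, -80.22), (38.37, -82.55)]  # hypothetical
print(proximity_weights(launch_site, stations))
```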
Since the wind data is not recorded at a fixed altitude interval (and is not even consistent from day to day at the same reporting location), an interpolation step size was needed. This step size determines the altitude range over which data is averaged into a single new point, and adjusting it can change the prediction drastically. A program default was implemented, but control over this variable was given to the user. A side benefit of this interpolation technique is that incomplete data files cause fewer problems, since a single complete data file can fill in the upper-air data on its own.

In programming, the code was laid out in a way that facilitates expansion and revision. Adding launch locations and wind data reporting locations is simple because vectors were used in the code; adding a new location is as simple as copying and pasting a few lines of code, making the changes for the new information, and rebuilding. Adding new features should be relatively simple as well. The code is laid out in a fairly linear fashion but takes advantage of function calls to subdivide the work into specific tasks. There is a significant amount of in-line commenting, and variable names are generally indicative of the information they contain. These programming techniques facilitate further development of the program.
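The altitude binning implied by the interpolation step size could look something like the following Python sketch. This is assumed behavior written for illustration, not the program's actual code; directions are averaged through wind components to avoid the 0/360-degree wrap.

```python
# Sketch of altitude-bin averaging of several soundings with given weights.
# Each sounding is a list of (altitude_m, speed, direction_deg) tuples.
import math

def bin_average(soundings, weights, step_m=500.0, top_m=30000.0):
    merged = []
    alt = 0.0
    while alt < top_m:
        u = v = den = 0.0
        for sounding, w in zip(soundings, weights):
            for a, spd, ddeg in sounding:
                if alt <= a < alt + step_m:
                    u += w * spd * math.sin(math.radians(ddeg))
                    v += w * spd * math.cos(math.radians(ddeg))
                    den += w
        if den > 0:                       # a single complete file can fill a bin
            spd = math.hypot(u, v) / den
            ddeg = math.degrees(math.atan2(u / den, v / den)) % 360.0
            merged.append((alt + step_m / 2.0, spd, ddeg))
        alt += step_m
    return merged
```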

46


Results Obtained
The outcome of this project is a working program that predicts the flight path of high altitude balloons with a respectable amount of accuracy. The objectives of the project have been met, and the features are all combined into one flight path prediction program. Figure 1 shows results of the post-launch predictions done for the NASA Explorer Post launch on October 29, 2005. The output is graphical, and each landing location can be selected for additional information. Because the results are plotted in Google Earth, there are additional bonuses such as satellite images, point-to-point distance measurement, and driving directions.

Significance and Interpretation
The prediction results are decent approximations of the actual landing location. Unfortunately, the program is not accurate enough to be used in place of tracking the payload as it flies; flight path prediction remains a means of generalizing the landing location, since there are too many variables that cannot be taken into account to provide extremely accurate results. Flight path prediction can be used as a tool for selecting a launch location by simulating the flight from various sites: based on the predicted landing location in each case, an appropriate launch site can be chosen. Also, in the event of a loss of communication with the payload during flight, the predictions can be used to approximate the path of the balloon so that backup systems, such as close-proximity Morse code beacons on board the payload, can assist in recovery.

Figure 1. NASA Glenn Explorer Post launch (October 29, 2005). Two post-launch predictions as well as the actual landing location are shown (along with the launch location). The highlighted prediction used interpolated wind data.

47


Acknowledgments
The author expresses grateful acknowledgement of the work done by David Snyder in providing a starting point with his flight path prediction algorithm. The author also thanks Dr. Mitch Wolff for support, encouragement, and guidance, the Wright State High Altitude Balloon Team for assistance and flight data, the NASA Explorer Post Balloon Team for flight data, and the University of Cincinnati Balloon Team for flight data.

References
1. Von Glahn, Rick. "Program Overview." BalloonTrack for Windows. 10 Dec. 2004. Edge of Space Sciences. 1 Oct. 2005 <http://www.eoss.org/wbaltrak/>.
2. "Google Earth - Home." Google Earth. 2006. Google. 1 Nov. 2005 <http://earth.google.com/>.
3. Snyder, David B. "Re: Balloon Launch [Fwd: Program]." Email to the author. 19 Oct. 2005.
4. Oolman, Larry. "Atmospheric Soundings." University of Wyoming. 1 Sept. 2005 <http://weather.uwyo.edu/upperair/sounding.html>.
5. Nikšić, Hrvoje. "GNU Wget - GNU Project - Free Software Foundation (FSF)." GNU Wget. 2004. 1 Nov. 2005 <http://www.gnu.org/software/wget/>.
6. "Calculation of Distance Represented by Degrees of Latitude and Longitude." University of Colorado. 1 Feb. 2006 <http://www.colorado.edu/geography/gcraft/warmup/aquifer/html/distance.html>.

48


Ecology on Mars: Applying Knowledge of Ecological Relationships to Design a Biosphere on Mars

Student Researcher: Amanda G. Crim

Advisor: George Massa

Cleveland State University

Department of Curriculum and Foundations, Urban Secondary Teaching

Abstract
Mars has captivated the imaginations of explorers and writers for centuries. People have long dreamed of visiting Earth's sister planet in the hopes of learning more about Earth's history and how life may have evolved, and science fiction writers have imagined futures in which humans have successfully explored and colonized Mars. NASA has taken the first steps toward Mars exploration by sending unmanned satellites and probes to the red planet, but the next goal is to send astronauts to the surface for long-term research. This goal can only be achieved if a functioning biosphere (an enclosed, self-sustaining environment) can be designed for use on Mars' surface. As a final assessment for our unit on ecology and ecosystems, my students were asked to address this challenge. Within groups they conducted research to determine what ten astronauts would need in order to survive on Mars for eighteen months, and designed a biosphere based on their research. Each group produced a detailed poster presenting their research and design.

Project Objectives
To successfully complete this project, students were expected to interpret and apply their knowledge of ecosystems to a real-life problem-solving situation. Students needed first to examine and analyze the task, and then to investigate specific solutions to the problem of survival on Mars within a biosphere. This included researching various methods of nutrient recycling (water, CO2, O2, nitrogen), building designs that minimize energy loss while withstanding the Martian weather, and plants that can be easily grown and meet the nutrient needs of humans. Once their research was complete, they selected the components they judged necessary for success and designed their biospheres on posterboard. Students labeled and justified the inclusion of each component of the biosphere, supporting their statements with their research.

Methodology Used
Three sections of 10th grade biology students completed this project over the course of two and a half weeks as a final assessment for a unit on ecology. Students were allowed to choose groups of 4-5 students. Once groups were formed, each student was provided a "Destination: Mars!" packet outlining the group's goals for the project. Members chose roles from the list provided (climatologist, geologist, architect, botanist, biologist), with some members taking a minor role (e.g., architect) in addition to a major one (e.g., climatologist). Once assigned a role, each student was responsible for the "role responsibilities" outlined in the packet. The school's mobile laptop unit was available for seven days for students to conduct research online, and three days of class were devoted to developing the biosphere poster. Students' posters were assessed using the "poster required elements" rubric provided.

Results Obtained
Students initially seemed interested in the real-world application of ecological concepts and excited by the prospect of their work being shown to NASA. Much of the initial enthusiasm seemed to wane as students began to comprehend the amount of research required of them. At the conclusion of the project, student groups produced posters of their biospheres, though adherence to the requirements varied greatly. Several students worked well together and met or exceeded the expectations of the teacher, while others clearly did not utilize research time effectively, and their posters reflected little integration of the concepts.

49


Significance and Interpretation of Results
While a significant amount of scaffolding was provided, many students seemed to have difficulty solving the problem when given the independence to approach the issue in their own ways. Students likely struggled because most of them have attended urban schools throughout their lives, where instruction has typically been teacher-centered: very seldom have they been asked to take so much responsibility upon themselves. Additionally, this project may have been one of their first serious research projects, requiring research and integration of knowledge rather than just regurgitation of facts. Overall, they were exposed to an exciting application of ecology and biology and began to develop basic research skills that will serve them well in the future. Use of mini research projects prior to undertaking the "Destination: Mars!" project may help ease students into the mind-set and skill set of researchers, thus making this project more enjoyable and less overwhelming.

Acknowledgments and References
This project would not have been possible without the guidance of Mr. George Massa, my science mentor teacher at Shaw High School in East Cleveland, or the input and constant support of Chad Seys, my partner. Additionally, the template for the "Destination: Mars!" mission letter was adapted from Jay Costanza, a teacher dedicated to science instruction through inquiry.


Experimental Investigation into the Effects of Velocity and Pressure on Coulomb Friction

Student Researcher: Justin R. Crunkleton

Advisor: Dr. Hazel Marie

Youngstown State University Department of Mechanical Engineering

Abstract  The objective of this project is to experimentally determine the Coulomb friction force that develops between metal parts exhibiting displacement with respect to each other. The bodies of particular interest to this research are representative parts of compliant finger seals currently being designed, modeled, and tested for use in the turbo-engine industry. In these seals, as one layer of the seal moves with respect to another, Coulomb friction develops between the two layers. For modeling and optimization of the seal's dynamic motion, knowing the friction characteristics is necessary. Experimental data was collected for friction force vs. relative velocity and vs. normal clamping force. A dynamics model of the motion of the finger seal was developed utilizing the variable Coulomb friction force. This will be compared to a dynamics model using a constant Coulomb friction force. The experimentally determined Coulomb friction force is used in a two degree-of-freedom dynamics model.

Project Objectives  Seals are integral parts of the modern gas turbine engine. They are used in locations between stationary and rotating parts, over blade tips, between components, and throughout the internal cooling paths. Large turbine engines can have over 50 gas path locations that require sealing of some type. Sealing in modern-day engines has become quite important because engines are required to operate at higher and higher temperatures and cycle pressure ratios. Poor sealing will lead to poor engine performance. Over the years many types of seals have been created. One of the standard seals by which new seal designs are measured is the labyrinth seal. Labyrinth seals use labyrinths to create an air flow restriction between high-pressure and low-pressure regions. Labyrinth seals are not compliant and will eventually experience wear and erosion, which leads to losses in engine performance. The answer to the problems experienced with the labyrinth seal was the brush seal. Brush seals utilize rows of dense bristles. Although brush seals are compliant and have a longer lifespan than labyrinth seals, they are susceptible to bristle stiffening and a variety of other shortcomings, such as exhibiting non-compliant properties in certain situations. The answer to the problems of the labyrinth and brush seals is the finger seal. Finger seals demonstrate significant improvements over the problems exhibited by labyrinth and brush seals. The ultimate goal of this project is to develop an analytical model of the Coulomb friction that develops during usage of compliant finger seals. The flexible fingers of the seal lift radially to accommodate shaft excursions and relative growth of the seal (from rotational forces and thermal mismatch). As the layers of the seal move with respect to each other, Coulomb friction develops between the layers. The traditional model, F_f = \mu N, generally assumes the friction to be independent of velocity (dynamic \mu is constant) and directly proportional to the normal force. This research will check the appropriateness of the traditional model.

Methodology Used  Two methods were used to accomplish the goal of this project. The first was a physical experiment, and the second was a computer simulation. The purpose of the first test was to show how the friction force between flat plates of steel varies with velocity and normal force. The surfaces of the sliding plates represent the surfaces of finger seals.
The computer simulation first required the construction of a two degree-of-freedom model of the system, which includes the finger mass, rotor mass, stiffness and damping of finger seal stick, stiffness and damping of fluid, and the force applied to the rotor. Once the differential equations were determined, MathCAD was used to show the motion of the rotor and the finger seal with respect to each other.


Results Obtained  The results of the physical experiment can be found in Figures 1 and 2. The mass-spring-damper model and the subsequent free body diagrams for the rotor and finger motion are shown in Figures 3 and 4, respectively. After applying Newton's 2nd Law to each of the masses in the two-DOF model, one obtains

-k_{Sequ}\,x_{FS} - c_{Sequ}(vel)\,\dot{x}_{FS} - k_{Fequ}(x_{FS} - x_R) - c_{Fequ}(\dot{x}_{FS} - \dot{x}_R) = m_F\,\ddot{x}_{FS}

F(t) + k_{Fequ}(x_{FS} - x_R) + c_{Fequ}(\dot{x}_{FS} - \dot{x}_R) = m_R\,\ddot{x}_R

Written in matrix form, these equations become

\begin{bmatrix} m_F & 0 \\ 0 & m_R \end{bmatrix} \begin{Bmatrix} \ddot{x}_{FS} \\ \ddot{x}_R \end{Bmatrix} + \begin{bmatrix} c_{Fequ} + c_{Sequ}(vel) & -c_{Fequ} \\ -c_{Fequ} & c_{Fequ} \end{bmatrix} \begin{Bmatrix} \dot{x}_{FS} \\ \dot{x}_R \end{Bmatrix} + \begin{bmatrix} k_{Fequ} + k_{Sequ} & -k_{Fequ} \\ -k_{Fequ} & k_{Fequ} \end{bmatrix} \begin{Bmatrix} x_{FS} \\ x_R \end{Bmatrix} = \begin{Bmatrix} 0 \\ F(t) \end{Bmatrix}

Solving the two differential equations, one obtains the expressions for the motion of the rotor and finger seal:

x_R(t) = \frac{Z_{2,2}\,F(t) - Z_{1,2}}{Z_{1,1}\,Z_{2,2} - (Z_{1,2})^2}

x_{FS}(t) = 0.00075\,\mathrm{in}\cdot\frac{-Z_{1,2}\,F(t) + Z_{1,1}}{Z_{1,1}\,Z_{2,2} - (Z_{1,2})^2}

where

Z_{1,1} = -m_R\,\omega^2 + i\,\omega\,c_{Fequ} + k_{Fequ}
Z_{1,2} = -i\,\omega\,c_{Fequ} - k_{Fequ}
Z_{2,2} = -m_{foot}\,\omega^2 + i\,\omega\,\bigl(c_{Fequ} + c_{Sequ}(vel)\bigr) + \bigl(k_{Fequ} + k_{Sequ}\bigr)
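The impedance terms above can also be checked numerically. Below is a minimal MATLAB sketch of a standard steady-state harmonic solution built from the same Z terms; it is not term-for-term identical to the MathCAD expressions above, and the masses, stiffnesses, damping values, forcing amplitude, and frequency are illustrative placeholders rather than the experimentally determined values used in this study.

    % Frequency-domain response of the two-DOF rotor / finger-seal model.
    % All numerical values are illustrative placeholders.
    mR = 0.10;  mFoot = 0.02;        % rotor and finger-foot masses
    kFequ = 500;  kSequ = 800;       % equivalent stiffnesses, lb/in
    cFequ = 0.5;                     % equivalent fluid damping
    cSequ = @(vel) 0.2 + 0.05*vel;   % velocity-dependent Coulomb-friction damping (assumed form)
    w   = 2*pi*60;                   % forcing frequency, rad/s
    F0  = 1.0;                       % forcing amplitude, lb
    vel = 5.0;                       % relative sliding velocity, in/s

    % Impedance terms corresponding to Z(1,1), Z(1,2), Z(2,2) above
    Z11 = -mR*w^2    + 1i*w*cFequ + kFequ;
    Z12 = -(1i*w*cFequ + kFequ);
    Z22 = -mFoot*w^2 + 1i*w*(cFequ + cSequ(vel)) + (kFequ + kSequ);
    den = Z11*Z22 - Z12^2;

    % Steady-state rotor and finger-seal responses to F(t) = F0*sin(w*t)
    t   = linspace(0, 0.2, 1000);
    xR  = imag(( Z22/den) * F0 * exp(1i*w*t));
    xFS = imag((-Z12/den) * F0 * exp(1i*w*t));
    plot(t, xR, t, xFS);  legend('rotor x_R', 'finger seal x_{FS}');
    xlabel('time, s');  ylabel('displacement, in');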

x_R(t) and x_FS(t) were plotted against time using MathCAD. The graphs in Figures 5 and 6 show how x_R(t) and x_FS(t) vary with time.

Significance and Interpretation of Results  Figure 1 shows that the friction force and the normal force are proportional to each other and have a linear relationship. This agrees with the traditional friction model, F_f = \mu N. Figure 2 shows that the friction force may not be entirely independent of velocity, as has been previously assumed. The trend appears to be such that the friction force increases over the velocities shown, though it can be approximated as linear over certain velocity ranges. The graphs from the MathCAD simulation are shown in Figures 5 and 6. The graphs show the motion of the finger seal in response to the motion of the rotor. Figure 5 shows the relationship between the motions when the Coulomb friction is constant, and Figure 6 shows the relationship when the Coulomb friction varies with velocity. As illustrated, there is an obvious difference between the two graphs. Figure 6 shows that the motion of the finger seal varies as a result of the variable Coulomb friction. As the Coulomb friction increases, the motion of the finger seal slows as compared to the rotor motion. And as the rotor comes to a halt (at its peak position on the graph), the velocity of the finger seal slows, with a consequential decrease in the Coulomb friction. The net upward force acting on the finger seal is thus greater and causes the finger seal to move more quickly away from the rotor. This finding may be significant. If further research shows that the velocity of the finger seal is not completely independent of the Coulomb friction, the current dynamics model of the finger seal will have to be changed to accommodate this relationship.


Figures/Charts

Figure 1. Friction Force versus Normal Force. [Plot axes: Normal Force (lb) vs. Friction Force (lb).]

Figure 2. Friction Force versus Relative Velocity. [Plot axes: Velocity (in/s) vs. Friction Force (lb).]

Figure 3. Mass-Spring-Damper Representation of Two DOF Model.


Figure 4. Equivalent Two DOF Dynamic Model.

Figure 5. Rotor/Finger Response vs. Time (Constant Coulomb Friction).


Figure 6. Rotor/Finger Response vs. Time (Variable Coulomb Friction).

Acknowledgments and References
1. M. P. Proctor, A. Kumar, and I. R. Delgado. "High-Speed, High-Temperature Finger Seal Test Results." http://gltrs.grc.nasa.gov/. July 2002.
2. Elmer, Franz-Josef. "Nonlinear Dynamics of Dry Friction." 1997 J. Phys. A: Math. Gen. 30, 6057-6063. http://www.iop.org/EJ/abstract/0305-4470/30/17/015.
3. Braun, M. J., Pierson, H. M., Deng, D., et al. "Structural and Dynamic Considerations Towards the Design of a Padded Finger Seal." 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference, Huntsville, Alabama. July 2003.


Roller-ragious

Student Researcher: Elizabeth M. Davis

Advisor: Dr. Paul C. Lam

The University of Akron College of Education

Abstract  The goal of this project is to help students discover the different aspects of physical science using an exploratory process. The students will be using the Internet to help them explore different features of roller coasters and the laws of physics that govern the construction of these large thrill rides. Once the students have done some research about familiar roller coasters and completed a simulated roller coaster, they can begin to build. The students will use everyday objects and teamwork to complete a roller coaster that will be able to successfully send a cart (a tennis ball) from the beginning of the first hill to the exit of the third hill. The most important outcome of this project is for students to leave with an understanding of the particular and necessary role that energy and forces play in some of our favorite and most thrilling experiences.

Name: Libbi Davis Date:

Subject Area: Math/ Science Grade Level: 7

Lesson Topic: Roller-ragious Time Allocation: 3-4 class periods

Instructional Goals:

Using technology resources and knowledge of energy conservation and transfer students will be able to construct their own model roller coaster.

Learning Objectives:

1. Students will be able to identify gravitational potential energy and kinetic energy. 2. Students will be able to explain the loss of mechanical energy to heat and friction. 3. Students will be able to explain the conservation of energy. 4. Students will be able to collect data about existing roller coasters. 5. Students will be able to create graphs based on their data collection.

Standards:

Science: Physical Science
Benchmark B: In simple cases, describe the motion of objects and conceptually describe the effects of forces on an object.
Benchmark D: Describe that energy takes many forms, that some forms represent kinetic energy and some forms represent potential energy, and that during energy transformations the total amount of energy remains constant.
Mathematics: Patterns, Functions and Algebra Standard
Benchmark C: Use variables to create and solve equations and inequalities representing problem solutions.
Mathematics: Data Analysis and Probability Standard
Benchmark A: Read, create and use line graphs, histograms, circle graphs, box-and-whisker plots, stem-and-leaf plots, and other representations when appropriate.
Benchmark E: Collect, organize, display and interpret data for a specific purpose or need.

Grouping of Students:

Whole Group- Discussion/ Introduction to Roller Coasters Small Groups (4 students) - Data Collection and building model

Materials:

1. Each Group: 2- 70 cm x 200 cm pieces foam board, cutting supplies, hot glue and gun, or tacky glue, meter stick, ball (e.g., tennis ball). 2. Computers 3. Worksheets


Prior Knowledge Needed:

1. Students should be able to complete multi-step equations with variables. 2. Students should be able to collect and organize data on a chart or graph. 3. Students should be able to use the computer and Internet to find appropriate information.

Procedures: (Differentiate between what the teacher will do and what the student will do)

Instructional Strategies:
1. The teacher will lead a discussion about student experiences with roller coasters. Questions:
   1. Have you ever been on a roller coaster? What did you like or dislike about the ride?
   2. How fast do you think roller coasters go? Why do some go faster than others? (Help students discover that the height of the first hill is the main factor affecting the maximum speed; a short worked example follows this section.)
   3. How long do you think the ride lasted? (Help students understand that factors like the length of the track and the height and steepness of the drops will affect duration.)
Learner Activities:
1. Students will use the Amusement Park Physics—Roller Coaster website to create a model roller coaster online. There, their roller coaster will be inspected for fun and safety. This will help students gain ideas for their model. Students will answer a worksheet after completing the website module.
2. Students will use the Cedar Point website to gather information about the Magnum XL-200, the Millennium Force, and one other coaster of their choice (speed; height of hills 1, 2, and 3, if applicable; length of the track; and duration). Students will chart this information as they choose.
3. Students will work with their groups to create a model roller coaster using foam board, hot glue, a ball (such as a tennis ball) as the train, a meter stick, and cutting supplies (box knife, scissors).
   A. Students will receive two 70 cm x 200 cm pieces of foam board, one for each side of the track. Students will cut their hills from the board; the two sides must be identical.
   B. Students will use the excess board to cut out 4 cm x 12 cm rectangles for spacers between the two tracks, down which the ball will ride.
4. Students will answer various questions on paper about the design of the coaster before they begin to build.
5. Students will write a reflection stating why, or why not, the ball made it through the entire coaster.
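To make the energy relationship behind discussion question 2 concrete, here is a minimal MATLAB sketch; the 0.70 m hill height comes from the 70 cm foam-board limit above, while the 20% friction loss is purely an illustrative assumption, not a measured value.

    % Ideal speed at the bottom of a hill from conservation of energy:
    % m*g*h = (1/2)*m*v^2  =>  v = sqrt(2*g*h), independent of the ball's mass.
    g = 9.81;                         % gravitational acceleration, m/s^2
    h = 0.70;                         % m, tallest hill the 70 cm foam board allows
    v_ideal = sqrt(2*g*h);            % about 3.7 m/s with no friction

    % With an assumed 20% loss of mechanical energy to friction and heat:
    loss = 0.20;                      % illustrative assumption
    v_real = sqrt(2*g*h*(1 - loss));  % about 3.3 m/s
    fprintf('ideal: %.2f m/s, with losses: %.2f m/s\n', v_ideal, v_real);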

Addressing Diversity:

Learning Modalities:
Auditory Learners - Students will take part in a discussion that allows them to talk about ideas and decide on the model.
Visual Learners - Students will use the module from the Amusement Park Physics—Roller Coaster website to provide a visual example of possible roller coaster designs.
Kinesthetic/Tactile Learners - Students will be physically creating a roller coaster to test and share with the class.
Special Accommodations: Students with physical restrictions can complete all aspects of the project with assistance from a tutor or their group members. ADHD students will be able to work on several different tasks; they can have two building tasks between which they can switch until both are completed.


Assessment/s:

Before instruction: The discussion will allow the teacher to get an understanding of student knowledge of roller coasters and of energy conservation and transformation.
During instruction: The websites and follow-up questions help the teacher observe student understanding of energy in relation to roller coasters.
After instruction: The students whose coaster has the greatest total height of all three hills and a completed exit off the roller coaster win the contest. All students who have a completed run on the roller coaster receive full points. Students will also write a reflection on their roller coaster stating why the ball did or did not complete its run on the coaster.

References: (books, texts, websites)

Books: Texts: Websites: http://www.cedarpoint.com/ http://www.learner.org/exhibits/parkphysics/coaster.html

The Roller-ragious lesson plan uses the ideas of cooperative learning and constructivism, which allow students to discover energy and forces in relation to the real physical world. Students work in groups, and each person must contribute to the group so that the project is completed along with the discovery activities. By reflecting on their own experiences with the real world, the students will construct an understanding of the world they live in and of the world of adventure they may take for granted. Students will create a physical model to help add to their mental models of energy and forces, which will help strengthen their schemata for the physical sciences. Learning requires understanding of the whole concept, not just its parts. Instead of having students memorize definitions and do isolated procedures with those definitions, students will use their knowledge of the several forces and formulas of energy to create their model. They will use their knowledge of the parts to understand a whole concept. Students will use their knowledge to create a meaningful project instead of regurgitating the "right" answer. Student engagement requires motivation from the students to become involved in the project at hand. This activity begins with students talking about their own experiences and using their knowledge of and experience with roller coasters to get the project started. Students will be motivated to work on this project because they are involved in an investigation that requires them to call on their own lives and experiences. Student motivation is very important from the beginning of the project because the students will be the driving force of the project. This project allows students to take ownership of their work, with assistance from the teacher when they "get stuck". Student engagement is directly related to constructivist theory. Students will be more motivated and engaged in the learning process because they are constructing meaning through the investigation of existing roller coasters and the construction of their own model.


Scoring Rubric (Excellent / Fair / Poor)

Participation (Individual) /9
- Excellent: Student was attentive and added to the discussion; contributed to the group research and website reviews; played an active role and contributed to the design and construction of the coaster.
- Fair: Student was somewhat attentive and added few comments to the discussion; contributed to some of the group research and website reviews; contributed to some of the design and construction of the coaster.
- Poor: Student was not attentive and did not add to the discussion; did not contribute to the group research and website reviews; did not play an active role and did not contribute to the design and construction of the coaster.

Worksheets: Before, During and After Activities (Individual) /30
- Excellent: Completed all the activities; explained answers and gave a rationale.
- Fair: Completed 2 out of 3 of the activities; explained some answers and gave a rationale for some of the questions.
- Poor: Completed one of the activities; did not explain answers and did not give a rationale.

Completed Roller Coaster (Group) /45
- Excellent: Roller coaster completed all three hills.
- Fair: Roller coaster completed two out of three of the hills.
- Poor: Roller coaster completed one of the hills.

Reflection (Individual) /20
- Excellent: Reflects student knowledge of force and energy; clearly explains the success of the coaster and how it could be improved.
- Fair: Reflects some student knowledge of force and energy; somewhat explains the success of the coaster and how it could be improved.
- Poor: Does not reflect student knowledge of force and energy; does not clearly explain the success of the coaster or how it could be improved.


Problem Based Learning in Mathematics

Student Researcher: James G. Davis

Advisor: R. D. Nordgren, Ph.D.

Cleveland State University College of Education

Abstract  Problem-based learning (PBL) describes a learning environment that begins with a problem that requires students to gain new knowledge before it can be solved. The students must interpret the problem, gather needed information, identify possible solutions, evaluate options, and present conclusions. Proponents of this approach insist that students become better problem solvers and learn to solve problems heuristically. This strategy encourages students to think critically, present their own creative ideas, and communicate with peers mathematically. This style of learning and teaching is frequently used with great success in the medical profession. Solving problems is what mathematicians have been doing for centuries, so why not use this approach to teach mathematics to urban students? Most of the literature on this topic supports the strategy as a very effective way of teaching, if implemented correctly. The effectiveness of this approach depends on student characteristics, classroom culture, and, of course, the problems used.

Project Objectives  The primary objective was to implement a problem-based teaching strategy for several ninth grade algebra classes and determine if it was effective in improving achievement and increasing interest in mathematics. A secondary objective was to improve collaborative group work skills while covering information that would help prepare the students for the upcoming Ohio Graduation Test (OGT). A third objective was to use NASA educational materials as much as practical in this research project.

Methodology Used  The research began with administration of a pretest to determine students' baseline achievement level. A final test on the text chapter related to solving algebraic equations was used because we were just finishing this chapter. An inventory on learning styles and teaching method preferences was then administered to determine the students' attitudes about learning and about methods related to teaching mathematics. A vital part of problem-based learning is collaborative groups. This part of the intervention required additional training on how to function in this type of environment. Handouts on collaborative group work and on problem-solving strategies were provided to the students and reviewed by the teacher. Several projects were selected from NASA materials and the Internet to help train students on problem solving and improve their group work skills. Students were allowed to select their own group for each project to minimize conflicts and prevent animosity. Projects were chosen to coincide with text materials and OGT standards. Projects lasted two to three days depending on their complexity and the amount of instruction needed to ensure the material was effectively covered. The projects are listed below:

• Paper Rockets - NASA Project • Rocket Racer - NASA Project • Measure Height - NASA Project & Methods combined • Bungee Barbie – Equations & Graphing

As indicated, I used OSGC materials in the first three projects. I modified the first two only slightly to run additional trials and have the students compute the mean, mode, median, and range of the trial data. I did this to enhance the mathematics and ensure this included statistical analysis to help students prepare for the OGT. The third project was combined with a project from my methods class due to their similarity.
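As a minimal MATLAB sketch of the summary statistics added to the rocket projects (the distances below are made-up sample data, not results from the class):

    % Paper-rocket flight distances from repeated trials (illustrative data, meters)
    d = [4.2 5.1 4.8 5.1 3.9 4.6];

    trial_mean   = mean(d);            % arithmetic average
    trial_median = median(d);          % middle value of the sorted data
    trial_mode   = mode(d);            % most frequently occurring value
    trial_range  = max(d) - min(d);    % spread between best and worst trial

    fprintf('mean %.2f  median %.2f  mode %.2f  range %.2f m\n', ...
            trial_mean, trial_median, trial_mode, trial_range);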


After the projects were completed, a final problem was introduced and the students were required to solve it in collaborative groups. It required students to estimate the number of jellybeans contained in a 2-liter soda bottle. The problem was based on Fermi Questions obtained from the Fermi website. There were several methods available to students to solve this problem. They were not allowed to simply guess. They had to solve it mathematically and were required to present their findings and solution methods to the class upon completion. At the end of the intervention, a final test was given to determine if students' achievement had improved. They were given the same inventories as before and asked to complete them again, to see if their attitudes had changed. Data from the pretest and posttest were compared.

Results  The class average on the posttest improved by 4 percent. There were some slight changes in average scores on some of the inventory questions. There was significant absenteeism during the intervention, making the project difficult to implement and control. Many group members were shifted during the projects to account for the absenteeism. Some groups acted as if playtime had begun instead of focusing on the project or problem. It was difficult to keep all groups on task.

Significance of Findings  The pretest and posttest were not identical, so the small improvement does not definitively show that overall achievement improved. Both tests' averages were near the overall class average on other tests given this year. The students did appear to enjoy the intervention, and several teachers and administrators in the school noticed the projects and constructivist methods employed for this intervention. There was a slight increase in the number of students who preferred working in small groups. There was also a small increase in the number of students who preferred working alone. Questions related to attitudes about mathematics showed little change on the post-intervention inventories. This can be attributed to the short duration of the research and the lack of previous training in collaborative efforts. There is considerable skill needed to effectively implement this type of strategy. The students must be well trained in collaborative working situations, and the projects must be chosen carefully to engage all the students. The atmosphere and culture at this school were not conducive to this type of teaching strategy. The students had little prior training in collaborative work. Previous group efforts had turned most of the students against group work due to poor implementation and training. A significant cultural shift in teaching methodology will be needed to foster a spirit of cooperation and improve students' attitudes related to this type of learning environment.

Acknowledgments  This project was made possible by my mentor teacher, Mr. Michael Losik. Without his approval and cooperation, it would have been impossible to prepare for and complete the projects and research. Dr. R. D. Nordgren continuously supported me and kept after me to ensure I was on track and that everything was going as planned. The staff at Shaw High School supported this effort, and several teachers asked to cooperate with me on this project and use some of the information provided by NASA in their classes.

References
1. Roh, K. H. (2003). Problem-based learning in mathematics. ERIC Clearinghouse for Science

Mathematics and Environmental Education (ERIC Document Reproduction Service No. ED482725). 2. Erickson, D. K. (1999). A problem-based approach to mathematics instruction. The Mathematics

Teacher, 92(6), 516-525. 3. Savery, J. R., & Duffy, T. M. (2001). Problem based learning: An instructional model and its

constructivist framework (CRLT Technical Report No. 16-01). Bloomington: Indiana University, Center for Research on Learning and Technology.

4. Confrey, J., Piliero, S. C., Rizzuti, J. M., & Smith, E. (1990). High school mathematics development of teacher knowledge and implementation of a problem-based mathematics curriculum using multirepresentational software (ACOT report number 11). Apple Classrooms of Tomorrow Research.


Mechanical Breakaway System for Safety Verification on the Subject Load Device for the Enhanced Zero Gravity Locomotion Simulator

Student Researcher: Arati V. Deshpande

Advisor: Gail Perusek

The Ohio State University

Biological Fluid Physics Department

Abstract  Bone loss is one of the leading dangers of prolonged space flight. The loss of bone density during space flight is vastly larger than that of a post-menopausal woman, and can result in decreased bone strength and increased risk of fractures in astronauts during long-duration missions. With this problem in mind, NASA GRC is collaborating with the Cleveland Clinic Foundation to build a countermeasure exercise system for understanding how bones are loaded in a "Zero G" environment. Dubbed the "enhanced Zero Gravity Locomotion Simulator" (eZLS), the system has been created to understand the optimal amount of bone and muscle loading from exercise in a space-type setting.

Project Objectives  A treadmill will be set up vertically with a human test subject hanging by bungees in a supine position. The person will be running in this position, which gives the body the feeling of weightlessness. A subject harness attached to cables and two linear motors will keep the volunteer from drifting away from the treadmill, and provide varying degrees of "gravity replacement" loads. This system is located behind the treadmill, and the cables attach it to the harness at the hips of the running test subject. The movement of the motor is strictly horizontal. The motor is controlled electronically and its job demands no error. The danger that arises is that if the motor, for any reason, takes on a larger load, the person running can potentially be forced toward the treadmill. There are redundant safety shutdown mechanisms which operate electronically, but in the event of a system failure, a dissimilar redundancy incorporating a mechanical breakaway device is desired. The solution is to create a mechanical safety device that can disconnect the running volunteer from the linear motor if a larger load is applied to the person. With this situation in mind, a design for a breakaway system that would be light, efficient, and practical was required.

Methodology  Recognizing Velcro as a light material that is widely used in a variety of applications, the breakaway system was designed to withstand a certain amount of load; any greater shear force exerted would cause the Velcro to separate and safely disconnect the volunteer from the linear motor. The first experiment held the Velcro design in a position where the Velcro template was made according to the guidelines set up by the company. The second experiment conducted did not hold, owing to the lack of a substantial adhesive to hold the back of the Velcro together. The third experiment proved to have a controlled, efficient, and strong design.

Results  Testing with the first experiment illustrated the fact that the model was not accurate. The weight the Velcro was supposed to hold did not correspond to its square-inch overlap. The rated capacity was only 40% of the total capacity claimed. An alternate model would have to be invented. The second test conducted did not hold at all; the adhesive was not as strong as the sewn Velcro. Even with the strong rubber cement, the Velcro did not seem to hold to itself. The third experiment proved to be the most efficient method and design for the breakaway device. The third design did not need adhesive or a sewing machine to strap the Velcro into a strong hold; it was latched onto itself. The breaking point at the D-ring overlap was roughly 140 lbs.
Also at a 12 inch overlap of the Velcro, the shear force exerted proved to be roughly 122 lbs.
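As a rough back-of-the-envelope check on these numbers, the following MATLAB sketch scales the measured 12-inch-overlap result linearly to estimate the overlap needed for a given breakaway load; the linear-scaling assumption and the target load are illustrative, not part of the original test plan.

    % Measured point from the third design: ~122 lbs of shear at a 12 in overlap
    F_meas = 122;                 % lbs
    L_meas = 12;                  % in of hook-and-loop overlap
    shear_per_inch = F_meas / L_meas;        % roughly 10.2 lbs per inch of overlap

    % Overlap needed for an assumed target breakaway load (illustrative)
    F_target = 140;               % lbs, comparable to the D-ring breaking point
    L_needed = F_target / shear_per_inch;    % roughly 13.8 in, assuming linear scaling
    fprintf('%.1f lbs/in of overlap; %.1f in needed for %d lbs\n', ...
            shear_per_inch, L_needed, F_target);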


[Figure: Force vs. time. Fitted curve: y = 6E-15 x^6 - 2E-11 x^5 + 2E-08 x^4 - 2E-05 x^3 + 0.0042 x^2 - 0.174 x + 6.1449, R^2 = 0.9991. Axes: Time (sec) vs. Force (lbs).]

Interpretation and Significance  The detachment of the Velcro was fast and proved to be an effective breakaway mechanism. However, differences in results between the three experiments may have resulted from differences in design: the first experiment may have involved a peel-away force instead of a shear force, which may have caused the lack of hold. A peel-away force has a substantially lower grip than a shear force. The second experiment did not hold at all; the problem was with the design, as the lack of a strong adhesive caused the Velcro to simply slip off. The third experiment confirmed that shear was the only acting force, and with the double-parallel, latched-to-self design, no problems seemed to stem from the model. Another point to keep in mind when conducting the experiments was that repeatability is difficult. The Velcro approach can be used for any other part of the eZLS; it can be integrated as part of the harness and areas of the suspension. The sharp difference between the experiments shows the need for further research.

Acknowledgments  The author of this paper would like to thank NASA Glenn Research Center, Darla Kimbro, and Susan Gott for giving her the opportunity to conduct this research. The author would also like to thank Gail Perusek and Sergey Samorezov for their expertise and technical support. She would like to thank Roberto Baez as well for his help in experimentation and analysis.

References
1. Kohrt, W. M., Bloomfield, S. A., Little, K. D., Nelson, M. E., Yingling, V. R. et al. American

College of Sports Medicine.Med Sci Sports Exerc. 2004 Nov;36(11):1985-96. 2. Turner, C. H., and A. G. Robling. Designing exercise regimens to increase bone strength. Exerc.

Sport Sci. Rev. 31:45-50, 2003.


Creating a Self-Sustaining Ecosystem

Student Researcher: Jeremy P. Deyoe

Advisor: Dr. Diane Corrigan

Cleveland State University Urban Secondary Teaching

Abstract  The red planet has always been a mysterious and desired astronomical destination. With our recent technological advances and our newly formed mission to return to the Moon, a trip to Mars seems to be in our near future. Due to the unique orbits of Earth and Mars, the trip would take approximately six months in each direction, but explorers would have to wait a year before they could attempt to return to Earth. Research on Mars would be far more successful with the construction of a space station in which astronauts could live and base their research. The design of this structure was previously set up by engineers, but the biological engine that would be necessary to drive biogeochemical cycling still needs to be designed and explained. My class of ninth grade biology students attempted to do just this. We thoughtfully designed and explained a biological engine that would promote a stable environment for life on another planet and keep our ecosystem thriving for generations to come.

Project Objectives  The purpose of this lesson was to use an authentic method of assessing students' knowledge while showing students how all the sciences tie together. This project was a culmination of an Ecology unit I had prepared for my students as well as a way to replace a multiple-choice exam. This was a group project which encouraged conflict resolution and democracy between students in my diverse classes. The project also fit the Ohio Department of Education academic content standards for grade 10 Life Science, Benchmark B, standard numbers 14, 15 and 16.

Methodology Used  This was the final project for our Ecology unit. Students were provided support and encouragement throughout the project to increase student compliance and success with the activity. Students were made aware of the expectations and importance of this project. As a class, we sat down and reviewed several important points, including the due date, objectives, rationale, and requirements for the project, again to promote the successful completion of this project. Students were then given the go-ahead to work on this project up until the due date that was just mentioned. The self-sustaining ecosystem was presented as a fictional way to make a biological engine that would keep all the organisms alive and keep the biogeochemical cycles flowing. Each group was part of a scenario in which they were a biological engineering company that had bid on this NASA project to build a biological engine to support the research station on Mars. The value at which each company bid was used as a limitation for what they could buy or bring to Mars. Due to the size of our spaceship, each group was also given a size limitation and a weight limitation for the supplies that they were going to bring. Each one of these budgets had to be maintained, and the final values could not exceed these numbers or the biological engineering company would go bankrupt. Each group was given a list of items that could be purchased. Each item was labeled with the cost per unit, the size per unit, and the weight per unit. The group was asked to produce a list of items with the total cost, weight, and volume of each item as well as the combined total for all purchased items. This list was one of five items to be given in the final proposal. The four other items were based on the list that was developed.
The proposal would also include an ecological pyramid with all the organisms in the appropriate trophic levels, a food web, a list of the biotic and abiotic factors in the ecosystem, and a drawing of the ecosystem using symbols.


The groups were given time to work together to problem solve how they were going to approach the project including what to do first and how they were going to break the work up. I maintained the role of a facilitator, answering questions when necessary and helping to guide students in the right direction, but at no time did I give examples of correct answers or tell them exactly how to finish the work. The student groups were independent and members of each group were asked to work together in a way that would create a unique ecosystem. Findings The assessment was a successful alternative to giving a test and many of the students seemed to enjoy the work more than they would have a test. There were some problems that arose during the project, but the project was successful in evaluating students’ knowledge of Ecology while teaching them several other important life skills in the process. Significance of Findings Using a project such as this to engage students in the sciences was very successful. The cooperation and exchanging of different views and ideas required of working together in teams to successfully complete the task was also very valuable. Many of the students were active leaders and worked well within their groups. The project did not get the same negative feedback that tests received. This authentic assessment was a valuable tool in assessing students’ learning. During the course of the project, I noticed that this form of authentic assessment promoted several other skills that I find important in student development. Students worked in groups in order to complete the task and were forced to use democracy within their group to organize and distribute materials. Time management was also a key skill that was needed to be successful. Many students delayed and were unable to complete the entire project on time. The third valuable skill that I noticed was problem solving. After being given the project, students were asked to get started with only the basic directions as opposed to step by step instructions on how they must proceed. Each individual group was to determine how to proceed. The mixing of the sciences gave me an opportunity as a teacher to connect and elaborate on concepts that we had already learned in biology and explain how they related to other sciences. I also found some astronomical misconceptions that we were able to talk about as a class when they came up. With the many positive aspects of this authentic assessment there were still negative aspects. Many students did not know how to proceed when given only basic instructions. Step by step instructions were demanded or the project would not get done. Several groups did not turn in a project, but the main reason seemed to be that they did not want to work. Acknowledgments This lesson would have not been possible without the help of John Silva of Valley Forge High School, Dr. Diane Corrigan from Cleveland State University, and Dr. Jane Zaharias from Cleveland State University.


Kinematics and Dynamics Analysis of NASA’s Robonaut

Student Researcher: Christopher A. Dodson

Advisor: Dr. Robert Williams, II

Ohio University Department of Mechanical Engineering

Abstract  In order to successfully design and control a robotic manipulator, it is first necessary to understand both the kinematics and dynamics of that manipulator. This project sought to develop a Matlab code that would simulate both the kinematics and dynamics of Robonaut, a humanoid robot being developed at NASA's Johnson Space Center. The kinematics and dynamics analysis conducted was limited to the seven degree-of-freedom arm only. The kinematics analysis was performed by first creating transformation matrices relating the position and orientation of each reference frame to the others. This was used for simulation and plotting. Next, a resolved-rate control scheme was implemented by determining the Jacobian matrix and commanding joint rates based on Cartesian rates. The rest of the kinematics analysis was performed using a Newton-Euler recursive algorithm, in which the kinematics is calculated outward link by link. For dynamics, an inward iteration was performed, beginning at the end effector, using the equations of motion for each link. A simulation was then performed to examine the ability to control such a manipulator using a resolved-rate control scheme, and joint positions, joint rates, joint torques, and end effector position were calculated and displayed graphically, along with a 3D plot of the manipulator links. A singularity analysis was also performed to search for undesirable link configurations.

Objectives  The objectives of this project are to investigate the kinematics and dynamics of the robotic arm of NASA's Robonaut. Theoretical analysis will be formulated into a Matlab code to perform a simulation of the motion of the arm by resolved-rate control. An animation and plots for joint angles, rates, and torques will be produced, and the effectiveness of the method used will be evaluated.

Introduction  Robonaut is NASA's humanoid robot, designed at the Johnson Space Center to assist astronauts in extravehicular activity. An operator on the ground or inside a spacecraft is linked to the robot through the use of a virtual interface. Robonaut is designed to perform actions equivalent to, or sometimes surpassing, those possible by a spacewalking astronaut. Each arm contains 7 degrees of freedom, similar to a human arm, and is controlled by an internal CPU that directs the robot's links in accordance with input from the operator.

Methodology

Forward Pose Kinematics  The pose of the various links of the arm, that is, the position and orientation of each reference frame, can be determined using what are called transformation matrices. To formulate these matrices it is necessary to define both the kinematic diagram and, from this, the Denavit-Hartenberg (DH) parameters. The four DH parameters will be defined as follows [1]:

\alpha_{i-1}: angle from \hat{Z}_{i-1} to \hat{Z}_{i}, measured about \hat{X}_{i-1}
a_{i-1}: distance from \hat{Z}_{i-1} to \hat{Z}_{i}, measured along \hat{X}_{i-1}
d_{i}: distance from \hat{X}_{i-1} to \hat{X}_{i}, measured along \hat{Z}_{i}
\theta_{i}: angle from \hat{X}_{i-1} to \hat{X}_{i}, measured about \hat{Z}_{i}


The kinematic diagram of Robonaut is shown below in Figure 1, and the tabulation of the DH parameters of the arm is shown in Table 1. Both the diagram and list of DH parameters were supplied by Dr. Robert Ambrose of the Dextrous Robotics Laboratory at the NASA Johnson Space Center.

Table 1. Arm DH Parameters [3]

Joint index i | Link length a_{i-1} | Twist angle \alpha_{i-1} | Joint offset d_i | Joint angle \theta_i
      1       |        0            |          0°              |      11.75"      |   \theta_1
      2       |      -2.0"          |        -90°              |        0         |   \theta_2
      3       |      +2.0"          |        +90°              |      12.75"      |   \theta_3
      4       |      -2.0"          |        -90°              |        0         |   \theta_4
      5       |      +2.0"          |        +90°              |      13.0"       |   \theta_5
      6       |        0            |        +90°              |        0         |   \theta_6 + 90°
      7       |        0            |        -90°              |        0         |   \theta_7 + 90°

Using these parameters it is possible to create a transformation matrix for each link using the expression shown in Equation 1 [1].

{}^{i-1}_{i}T = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & a_{i-1} \\ \sin\theta_i\cos\alpha_{i-1} & \cos\theta_i\cos\alpha_{i-1} & -\sin\alpha_{i-1} & -\sin\alpha_{i-1}\,d_i \\ \sin\theta_i\sin\alpha_{i-1} & \cos\theta_i\sin\alpha_{i-1} & \cos\alpha_{i-1} & \cos\alpha_{i-1}\,d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (1)
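A minimal MATLAB sketch of how Equation (1) and the Table 1 parameters chain together into the arm's forward kinematics; the joint angles below are arbitrary test values rather than a configuration from the study, and the +90° offsets of joints 6 and 7 are added to theta as in the table.

    % DH rows from Table 1: [a_(i-1) (in), alpha_(i-1) (deg), d_i (in), theta_i (deg)]
    theta = [10 -20 30 40 -15 25 5];          % arbitrary test joint angles, deg
    dh = [  0    0   11.75  theta(1);
           -2  -90    0     theta(2);
            2   90   12.75  theta(3);
           -2  -90    0     theta(4);
            2   90   13.0   theta(5);
            0   90    0     theta(6)+90;
            0  -90    0     theta(7)+90 ];

    T = eye(4);                                % running product: base frame to frame i
    for i = 1:size(dh,1)
        a = dh(i,1);  al = dh(i,2);  d = dh(i,3);  th = dh(i,4);
        ca = cosd(al); sa = sind(al); ct = cosd(th); st = sind(th);
        Ti = [ ct,    -st,    0,   a;          % Eq. (1): frame i expressed in frame i-1
               st*ca,  ct*ca, -sa, -sa*d;
               st*sa,  ct*sa,  ca,  ca*d;
               0,      0,      0,   1 ];
        T = T * Ti;
    end
    disp('End-effector position (in) in the base frame:');  disp(T(1:3,4).');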

Using these matrices one can know the orientation of the new reference frame, described within the upper left 3 x 3 matrix, as well as the position of the new frame’s origin, described by the 3 x 1 matrix in the upper right. Both the orientation and the origin position of the reference frame for each link will be needed in both the kinematics and dynamics analysis, and will be referred to later. Resolved Rate Velocity and Acceleration There are many different ways to determine the required joint rates of a manipulator. One such method is the analytical resolved-rate control scheme. This method involves making use of the relationship between the Cartesian velocity or acceleration being commanded to the manipulator and the required joint rates. To map the Cartesian velocities to angular velocities a matrix called the Jacobian is used. The mapping just described for a non-redundant manipulator is described by Equation 2 below.

\dot{X} = J\,\dot{\theta} \qquad (2)

where \dot{\theta} is a column vector listing all the individual joint rates. The Jacobian matrix can be calculated in the manner shown below in Equation 3.

Figure 1. Kinematic diagram of Robonaut arm [3]


{}^{k}J = \begin{bmatrix} {}^{k}\hat{Z}_1 \times {}^{k}P_{1N} & \cdots & {}^{k}\hat{Z}_i \times {}^{k}P_{iN} & \cdots & {}^{k}\hat{Z}_N \times {}^{k}P_{NN} \\ {}^{k}\hat{Z}_1 & \cdots & {}^{k}\hat{Z}_i & \cdots & {}^{k}\hat{Z}_N \end{bmatrix} \qquad (3)

where \hat{Z} is the unit vector in the z-direction, P is a position vector, i is the link being analyzed, k is the current reference frame, and N is the frame of the end effector. The unit vector \hat{Z} is determined by Equation 4.

{}^{k}\hat{Z}_i = {}^{k}_{i}R\,{}^{i}\hat{Z}_i \qquad (4)

where {}^{k}_{i}R is the rotation matrix relating the orientation of frame i to frame k, and is equal to the upper 3 x 3 portion of the transformation matrices. To get the required joint rates, Equation 2 will be solved for \dot{\theta}, Equation 5 below.

\dot{\theta} = J^{-1}\,\dot{X} \qquad (5)

However, use of Equation 5 only holds for Jacobians that are square. For redundant manipulators the number of columns will be greater than the number of rows, so what is called the Jacobian pseudo-inverse will be used instead, and can be calculated as shown in Equation 6.

J^{*} = J^{T}\left(J\,J^{T}\right)^{-1} \qquad (6)

where J^{T} is the transpose of the Jacobian. The pseudo-inverse simply manipulates the Jacobian such that it is square. For the resolved-rate control scheme, initial joint angles are first assigned and the Cartesian space vector X defined. Next the Jacobian, Jacobian transpose, and Jacobian pseudo-inverse are calculated and \dot{\theta} calculated for the initial time step (which is defined at the beginning of a "for" loop). The new joint angles are then calculated by incrementing the initial angles by the product of the joint rate \dot{\theta}_i and the time step dt, Equation 7.

\theta_{i+1} = \theta_i + \dot{\theta}_i \cdot dt \qquad (7)

This iteration is performed for each joint angle, and to simplify joint acceleration analysis the commanded Cartesian space acceleration is assumed to be zero. To calculate joint acceleration, the difference between the current and most recent joint rate, as well as the time step, was used according to Equation 8 below.

\ddot{\theta} = \left(\dot{\theta}_i - \dot{\theta}_{i-1}\right)/dt \qquad (8)
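A minimal MATLAB sketch of the resolved-rate loop of Equations (5)-(8), using the pseudo-inverse of Equation (6). To keep it short it is written for a planar 3R arm (still redundant for its two-dimensional task) with a hand-built geometric Jacobian, rather than for the full seven degree-of-freedom Robonaut arm; link lengths, the initial pose, and the commanded circular Cartesian velocity are all illustrative.

    % Resolved-rate control (Eqs. 5-8) with the pseudo-inverse of Eq. (6),
    % demonstrated on a planar 3R arm.  All numbers are illustrative.
    L  = [12; 12; 6];                     % link lengths, in
    th = [30; -40; 20]*pi/180;            % initial joint angles, rad
    dt = 0.01;  nsteps = 600;
    thdot_prev = zeros(3,1);
    path = zeros(nsteps,2);
    for k = 1:nsteps
        t = (k-1)*dt;
        Xdot = [-2*sin(2*t); 2*cos(2*t)];         % commanded end-effector velocity
        s  = cumsum(th);                          % absolute link angles
        vx = L.*cos(s);  vy = L.*sin(s);          % individual link vectors
        J  = zeros(2,3);
        for i = 1:3                               % geometric Jacobian, one column per joint
            J(:,i) = [-sum(vy(i:end)); sum(vx(i:end))];
        end
        Jpinv  = J' / (J*J');                     % Eq. (6): J* = J'*(J*J')^-1
        thdot  = Jpinv * Xdot;                    % Eq. (5): joint rates from Cartesian rates
        th     = th + thdot*dt;                   % Eq. (7): Euler update of joint angles
        thddot = (thdot - thdot_prev)/dt;         % Eq. (8): finite-difference acceleration
        thdot_prev = thdot;
        path(k,:) = [sum(vx), sum(vy)];           % end-effector position for plotting
    end
    plot(path(:,1), path(:,2));  axis equal
    xlabel('x, in');  ylabel('y, in');  title('End-effector path (sketch)');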

Kinematics  When considering the kinematics of a generic link i in frame i, the following Newton-Euler recursive algorithm was used, beginning with the ground link outward to the end effector. For each expression, \omega and \alpha will be used to denote angular velocity and acceleration, respectively, of the entire arm, whereas \dot{\theta} and \ddot{\theta} will denote angular velocity and acceleration of the joint angle. The algorithm used is shown below in Equations 9-13.

\omega_{i+1} = \omega_i + \dot{\theta}_{i+1}\,\hat{Z}_{i+1} \qquad (9)

\alpha_{i+1} = \alpha_i + \omega_i \times \dot{\theta}_{i+1}\,\hat{Z}_{i+1} + \ddot{\theta}_{i+1}\,\hat{Z}_{i+1} \qquad (10)

a_{i+1} = a_i + \alpha_i \times {}^{i}P_{i+1} + \omega_i \times \left(\omega_i \times {}^{i}P_{i+1}\right) \qquad (11)

a_{C_{i+1}} = a_{i+1} + \alpha_{i+1} \times P_{C_{i+1}} + \omega_{i+1} \times \left(\omega_{i+1} \times P_{C_{i+1}}\right) \qquad (12)


Inverse Dynamics Free-body diagrams of links i-1 and i are shown below in Figure 2, and serve to illustrate the forces acting on the link both at the joints and the center of gravity.

Figure 2. Illustration of forces and torques on link i

In this diagram F is the force and N is the moment acting about the center of gravity, f is the force and n is the moment acting at the joint, and \tau is the torque required by each joint. Solving the dynamic equations of motion for link i gives the following algorithm, Equations 13-17, that begins at the end effector and proceeds inward to the base frame.

F_{i+1} = m_{i+1}\,a_{C_{i+1}} \qquad (13)

N_{i+1} = {}^{C}I\,\alpha_{i+1} + \omega_{i+1} \times {}^{C}I\,\omega_{i+1} \qquad (14)

f_i = f_{i+1} + F_i \qquad (15)

n_i = n_{i+1} + P_{C_i} \times F_i + P_{i+1} \times f_{i+1} + N_i \qquad (16)

\tau_i = n_i \cdot \hat{Z}_i \qquad (17)

For the above equations, {}^{C}I is the inertia tensor about the center of gravity and is described by Equation 18.

{}^{C}I = \begin{bmatrix} I_{xx} & 0 & 0 \\ 0 & I_{yy} & 0 \\ 0 & 0 & I_{zz} \end{bmatrix} \qquad (18)

where I_{xx}, I_{yy}, and I_{zz} are principal moments of inertia in the x, y, and z axes, respectively.
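A minimal MATLAB sketch of one pass through Equations (9)-(17), written for a planar two-link arm with every vector expressed in the base frame (so the inter-frame rotations of the full algorithm are not needed) and with gravity included via the usual trick of giving the base an upward acceleration; the masses, lengths, and instantaneous joint state are illustrative, not Robonaut parameters.

    % Newton-Euler inverse dynamics (Eqs. 9-17) for a planar 2-link arm,
    % all vectors in the base frame.  Parameter values are illustrative.
    m  = [2.0 1.5];                 % link masses, kg
    Lk = [0.30 0.25];               % link lengths, m
    Iz = [0.02 0.01];               % link inertias about their CGs, kg*m^2
    th = [0.4 -0.3];  thd = [1.0 0.5];  thdd = [0.2 -0.1];   % joint state
    z  = [0;0;1];
    a0 = [0; 9.81; 0];              % base "acceleration" upward, to include gravity

    s1 = th(1);  s12 = th(1) + th(2);                 % absolute link angles
    P12 = Lk(1)*[cos(s1); sin(s1); 0];                % joint 1 origin -> joint 2 origin
    PC1 = 0.5*P12;                                    % joint 1 -> CG of link 1 (mid-link)
    PC2 = 0.5*Lk(2)*[cos(s12); sin(s12); 0];          % joint 2 -> CG of link 2 (mid-link)

    % Outward pass, Eqs. (9)-(12), starting from a base at rest
    w1  = thd(1)*z;                 al1 = thdd(1)*z;
    a1  = a0;                       % joint 1 sits at the non-accelerating base
    aC1 = a1 + cross(al1, PC1) + cross(w1, cross(w1, PC1));
    w2  = w1 + thd(2)*z;
    al2 = al1 + cross(w1, thd(2)*z) + thdd(2)*z;
    a2  = a1 + cross(al1, P12) + cross(w1, cross(w1, P12));
    aC2 = a2 + cross(al2, PC2) + cross(w2, cross(w2, PC2));

    % Inward pass, Eqs. (13)-(17), with no load on the tip (f3 = n3 = 0)
    F1 = m(1)*aC1;   F2 = m(2)*aC2;                   % Eq. (13)
    N1 = Iz(1)*al1 + cross(w1, Iz(1)*w1);             % Eq. (14); only Izz matters in-plane
    N2 = Iz(2)*al2 + cross(w2, Iz(2)*w2);
    f2 = F2;                                          % Eq. (15)
    n2 = N2 + cross(PC2, F2);                         % Eq. (16)
    f1 = f2 + F1;
    n1 = n2 + cross(PC1, F1) + cross(P12, f2) + N1;
    tau = [dot(n1, z); dot(n2, z)]                    % Eq. (17): joint torques, N*m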

Results

3D Plot of Arm  Plots of the initial manipulator configuration, all recorded values for joint angles, rates, and torques, the singularity analysis, and the output motion traced by the end effector are shown in Figures 3-9. Note that the desired trace of the end effector was that of a circle.


Figure 3. 3D Plot of arm initial position.

Joint Angles

Figure 4. Plots of joint angles (shoulder, upper arm, forearm, and hand).


Joint Rate: Velocity

Figure 5. Plots of joint angular velocity.

Joint Rate: Acceleration

Figure 6. Plots of joint angular acceleration.

Joint Torques

Figure 7. Joint torques.


Singularity Analysis

Figure 8. Determinant of the Jacobian.

Output Motion of End Effector

Figure 9. Motion of end effector.

Discussion and Conclusions  In comparing the output motion to the desired motion (a circle), it can be seen that while the path traced is circular, there are clearly configuration issues preventing the manipulator from achieving the desired path. If the plots for the joint rates, and that of the singularity analysis, are observed, it can be assumed that the configuration of the manipulator was very close to singularity conditions, in which joint rates and torques spike. This is undesirable, and future research will attempt to minimize the occurrence of singularities in the motion to better control the manipulator.

Acknowledgments  Special thanks to Dr. Robert Williams of Ohio University, project advisor, and Dr. Don Ambrose of the NASA Johnson Space Center, who supplied the information regarding the DH parameters and configuration of Robonaut. Their help is sincerely appreciated.

References
[1] Williams, Robert L., "ME 604 Advanced Robotics Class Notes."
[2] Craig, John J., Introduction to Robotics: Mechanics and Control, 3rd ed., Prentice Hall. 1-200.
[3] Ambrose, Don, "Robonaut Manipulator Parameter Arm Data," 2000.


Protection and Conversion Coatings

Student Researcher: Eric B. Dolence

Advisor: Dr. Jorge Gatica

Cleveland State University Department of Chemical Engineering

Abstract Conversion coating applications in the automobile and aerospace industry are gaining momentum as an alternative to chromate-based processes. This technology is particularly promising when combined with the well-researched area of catalysis. Our research is based on the catalytic effect of transition metals on high molecular weight organic phosphates. However, in order to accurately produce a desired chemical or conversion coating on the transition metal, the physical properties and mass transport phenomena must be understood and controlled. In order to develop this understanding, a commercial computational fluid dynamics package will be used to simulate the system being used to produce the coatings. From this research, we have found that the primary mass transfer mechanism is diffusion and the system at steady state operates at constant temperature and flow. Project Objectives The system we are using to research the chemical vapor deposition and coating process consists of several components. The focal point of the research is a small coupon of the metal being investigated as a substrate for the deposition. Aluminum and aluminum alloys are the primary metals being investigated, however, other alloys such as stainless steels and cast iron are also used. This coupon is placed on a cast iron host plate, which has a small well on one side containing the organic phosphate such as tert-butylated triphenyl phosphate. This host plate with the organic phosphate and the metal coupon is placed in a ceramic tube, which is heated by an electric heater. After the process reaches steady state and the coating is deposited, the system is cooled and the metal coupon is examined closely with relevant data being collected. Our research on catalysis in chemical coating production is a large and arduous task for any one person to complete. As a result, the project has been divided into several separate portions. Each of these portions will be handled by a single researcher or a small team of researchers. Though each portion is analyzed separate from the other portions, the team maintains high levels of communication both with each other and with the project advisor in order to efficiently complete the project. As a member of the team, my assignment is to focus on the mass transport and physical aspects of the process, which is of paramount importance in order to understand and control the deposition process and the accompanying chemical reactions. The catalytic activity occurring on the metallic substrate surface can be confined to the catalytic surfaces and its analysis and performance will be significantly affected by transport and fluid flow phenomena. These phenomena have significant impact on the coating produced as well as the catalytic reaction that occurs on the substrate surface. In order to understand the mechanisms that the process uses to produce the coatings, I have used the Fluent computational fluid dynamics program to find physical aspects of the system, including velocity and thermal profiles near the host plate and at the coating surface. Methodology Used Much like the vast majority of programs used in the engineering field, Fluent is a very complex and detailed program that requires a very specially designed model to simulate. In order to model the system, a preprocessor called Gambit is used to develop a “mesh” model of the system, which defined physical size and shape as well as basic system parameters such as a system input. 
Gambit is operated as a computer aided drafting program to create a wire frame model of the system. Once the base of the system is designed, the model is “meshed” through one or several meshing models. “Meshing” is a term used by


Fluent to describe the process of creating a system model with defined points of interest and accurate dimensions that Fluent can use to perform calculations and to determine physical aspects of the system at these points, such as the temperature, fluid velocity, and pressure. Once the mesh model was created, it was imported into Fluent, the system parameters were defined, and simulations were run. Among the parameters that were defined were energy and mass inputs to the system. The system was further specified by identifying applicable equations, which required a good understanding of the system in order to know the appropriate assumptions that can be made. With the system model properly imported and all parameters defined, simulations of the system were run. All simulations were run on a Dell Optiplex GX150, which has a 1 GHz Intel Pentium III processor and 512 MB of system RAM. These simulations were run as iterations approaching steady state and took up to 3 hours to complete. The results generated were then input to Microsoft Excel and MatLAB for further interpretation.

Results Obtained  Data generated through Fluent show that there is very minimal flow at the surface of the coating deposition. Flow profiles have consistent velocities on the order of 10^-5 m/s in the direction of the solid substrate, with no boundary layer being formed at the surface of the transition metal. Thermal fields in the vicinity of the host stage are constant throughout the deposition region. These temperatures were constant over the range of temperatures at which the simulation was performed, between 200°C and 500°C. All findings were generated from the model operating at steady state.

Significance and Interpretation of Results  Velocity profiles throughout the system, particularly near the coating surface, are found to be low enough to be considered negligible. If these flow fields were the primary mass transfer mechanism, it would take the vaporized aryl phosphate over an hour to travel approximately 5 cm to the surface of the substrate (at roughly 10^-5 m/s, traversing 0.05 m takes on the order of 5x10^3 s, or more than 80 minutes), which is longer than the process is run to produce the coating. From this knowledge and the data generated by Fluent, it is apparent that the primary mass transfer mechanism is diffusion. This also indicates that there is little to no boundary layer formed by the air flow, which is consistent with experimental results found by other members of the research team. Thermal fields around the deposition show no significant areas of high or low temperature with respect to the region as a whole. This information can be used in determining the reaction mechanism and reaction kinetics. The constant thermal field also shows that the vaporized aryl phosphate does not undergo any additional reactions as a result of thermal fluctuations.

Acknowledgments and References  More information about this project and the current project members can be found at <http://www.csuohio.edu/chemical_engineering/people/jeg/CREGroup/current_assist.htm> This project was originally developed by John Reye, a former graduate student at Cleveland State University in the Department of Chemical Engineering, as a master's degree thesis in 2000.


Internet Measurements of Packet Reordering

Student Researcher: Shavon Juanita Pauline Edmonds

Advisor: Dr. Edward Asikele

Wilberforce University Electrical Engineering Department

Abstract
The increase in link speeds, increased parallelism within routers and switches, QoS support, and load balancing among links all point to future networks with increased packet reordering. Unchecked, packet reordering will have a significant detrimental effect on end-to-end performance, while the resources required for dealing with packet reordering at routers and end-nodes will grow considerably. A formal analysis of packet reordering is carried out and the Reorder Density (RD) metric is defined for the measurement and characterization of packet reordering. RD captures the amount and degree of reordering, and can be used to define the reorder response of networks under stationary conditions. Properties of RD are derived, and it is shown that the reorder response of the network formed by cascading two subnets is equal to the convolution of the reorder responses of the individual subnets. Packet reordering over the Internet is measured and used to validate the derivations.

Introduction of Packet Reordering
Packet reordering happens when packets that are sent in order over the Internet arrive out of order at the recipient. To understand this concept, you must first understand the basics. For example, what is a packet? Everything you do on the Internet involves packets. For instance, every Web page that you receive comes as a series of packets, and every e-mail you send leaves as a series of packets. Networks that ship data around in small packets are called packet-switched networks. On the Internet, the network breaks an e-mail message into parts of a certain size in bytes. These are the packets. Each packet carries the information that will help it get to its destination -- the sender's IP address, the intended receiver's IP address, something that tells the network how many packets this e-mail message has been broken into, and the number of this particular packet. The packets carry the data in the protocols that the Internet uses: Transmission Control Protocol/Internet Protocol (TCP/IP). Each packet contains part of the body of your message. A typical packet contains perhaps 1,000 or 1,500 bytes. Each packet is then sent off to its destination by the best available route -- a route that might be taken by all the other packets in the message or by none of the other packets in the message. This makes the network more efficient. First, the network can balance the load across various pieces of equipment on a millisecond-by-millisecond basis. Second, if there is a problem with one piece of equipment in the network while a message is being transferred, packets can be routed around the problem, ensuring the delivery of the entire message. However, there is no assurance that the packets sent along these routes will be in order when they arrive at their destination. The reasons for out-of-order arrival of packets include, but are not limited to: (i) packet striping at layer 2 and 3 links, i.e., when an earlier packet is placed in a longer queue and a later packet in a shorter queue, the packets may arrive out of order [4,7]; (ii) retransmissions on wireless links [3] and due to TCP; (iii) DiffServ scheduling, where flows that exceed their constraints, e.g., non-conformant packets, are dropped or given a lower priority, leading to packet placement in different queues and resulting in out-of-order delivery [6]; and (iv) route fluttering, where, for example, a route may oscillate due to dynamic load splitting among the links.
In such cases, different packets of the same stream take different routes, leading to different delays [12]. Significantly, packet reordering has an impact on applications based on both TCP and UDP. In the case of TCP, when packets in the forward path go out of order, the receiver may perceive packets as lost, resulting in a reduced congestion window and an increased number of retransmissions [4,5] that further degrade performance. Reverse-path reordering, i.e., reordering of acknowledgements, results in the
loss of TCP's self-clocking property, leading to bursty transmissions and possibly to increased congestion [4]. Approaches for mitigating the impact of out-of-order packet delivery on TCP performance include adjusting the 'dupthresh' parameter, i.e., the number of duplicate ACKs to be allowed before classifying a following non-acknowledged packet as lost [19]. In delay-sensitive applications based on UDP, e.g., IP telephony, an out-of-order packet that arrives after the elapse of its playback time is treated as lost, thereby decreasing the perceived quality of voice. To recover from reordering, the out-of-sequence packets are buffered until they can be played back in sequence to the application. Thus, an increase in out-of-order delivery by the network consumes more resources at the end-hosts, and also affects the end-to-end performance of the applications. Researchers are attempting to address this issue at intermediate nodes, at the IP level. Many contemporary routers attempt to eliminate the reordering caused by the scheduling schemes within these nodes by either a) input reordering, i.e., identifying the individual streams and forwarding the packets of the same stream to the same queue, thus preventing reordering, or b) output reordering, i.e., buffering packets at the output of the router to ensure that the packets belonging to the same stream are released in order of their entry into the node [9]. For example, the network processors from vendors such as IBM, Motorola, Vitesse, TI, and Intel have built-in hardware to track flows. While these approaches reduce the reordering that occurs inside a router, they cannot eliminate reordering due to multiple paths. Furthermore, the complexity of these approaches will increase significantly as the number of parallel flows in a pipe increases (due to the need to keep information on a large number of parallel flows), and as the ratio of packet time to routing latency decreases. This report examines the reasons behind packet reordering and proposes a metric, Reorder Density (RD), for measuring reordering in a packet sequence. This metric captures the magnitude and statistical properties of reordering occurring in a network. The RD of the sequence leaving the network, corresponding to an in-order input packet sequence, is defined as the reorder response of the network. This report explains how to measure packet reordering in the Internet environment and how to report these measurements in a way that is useful to the reader.

Methods and Materials
The method that is proposed to solve the packet reordering problem is the RD (Reorder Density) metric. The percentage of out-of-order packets has been used as a metric for describing packet reordering. This method is ambiguous, imprecise, and does not provide information about the nature of reordering. One can argue that a good packet reordering measure should capture such effects, but a counter-argument can also be made that packet reordering should be measured strictly with respect to the order of delivery and should be application independent. A framework for metrics presented in [13] states that "The metrics must be useful to users and providers in understanding the performance they experience or provide." A metric for capturing the out-of-order nature of a packet sequence ideally should have the following properties:

• Simplicity: The measure should be simple, yet contain enough information to be useful.
• Orthogonality: The metric should, to the extent possible, be independent of, or orthogonal to, other phenomena that affect the packet stream, e.g., packet loss and duplication.
• Differentiability: The metric should provide insight into the nature of reordering, and perhaps even into possible causes. It should capture both the amount and extent of reordering.
• Usefulness: Rather than being a mere representation of the amount of reordering in a packet stream, the reorder metric must be useful to the application and/or resource management schemes. For example, it may allow one to determine the size of buffer that is required to recover from reordering.
• Evaluation complexity: The metric should be computable in real time. In evaluating reordering in an arbitrarily long sequence, one should be able to keep a running measurement without having to wait until all the packets have arrived. The memory requirement, i.e., the amount of state information, should not grow with the length of the sequence (N), and the computation time should be O(N).
• Robustness: The reorder measurement should be robust against different network phenomena and measurement peculiarities, such as a very late arrival of a duplicate packet or a burst of losses.
• Broader Applicability: A good metric would have applicability beyond just describing the nature of reordering in a given sequence of packets. For example, a good metric may allow one to combine the characteristics of individual networks to predict the reorder behavior of the cascade of these networks. Regeneration of a sequence that follows the measure is also a very useful application.

The Internet Engineering Task Force (IETF) has presented a few metrics for reordering [8,11]. However, these metrics fail to meet many of the criteria mentioned above, especially those related to differentiability, usefulness, and robustness. Another proposed reorder metric measures the occupancy density of the reorder buffer and, as such, is referred to as Reorder Buffer-occupancy Density (RBD).

• A formal representation of packet reordering is developed, along with the definition and evaluation of Reorder Density (RD) at the receiver.

Consider a sequence of packets (1, 2, …, N) transmitted over a network. A receive_index (1, 2, …) is assigned to each packet as it arrives at the destination. Lost and duplicate packets are not assigned a receive_index. First consider the case in which no losses or duplication of packets occur in the network. If the receive_index assigned to packet m is (m + m_d), with m_d ≠ 0, we say that a reorder event has occurred. A packet is late if m_d > 0, and early if m_d < 0. Thus, packet reordering of a sequence of packets is completely represented by the union of reorder events.
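To make the definition concrete, the following minimal Python sketch (not part of the original report; function and variable names are illustrative) computes the displacements and the resulting normalized RD for a received sequence such as the one shown in Table 1(a) below, under the assumption of no losses or duplicates.

```python
from collections import Counter

def reorder_density(arrived):
    """Displacements and normalized Reorder Density (RD) for a received
    packet sequence, assuming no losses or duplicates (cf. Table 1(a))."""
    displacements = []
    for receive_index, seq_num in enumerate(arrived, start=1):
        # Displacement: negative = early, positive = late, 0 = in order.
        displacements.append(receive_index - seq_num)
    counts = Counter(displacements)
    total = len(arrived)
    return {d: counts[d] / total for d in sorted(counts)}

# The arrived sequence of Table 1(a): packets 4/5 and 7/8 are swapped.
print(reorder_density([1, 2, 3, 5, 4, 6, 8, 7]))
# -> {-1: 0.25, 0: 0.5, 1: 0.25}
```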

Table 1. (a), (b) and (c) Examples of reordered sequences with corresponding R.

Arrived sequence:  1   2   3   5   4   6   8   7
Receive_index:     1   2   3   4   5   6   7   8
Displacement:      0   0   0  -1   1   0  -1   1

(a) No losses or duplicates

Arrived sequence:  1   2   5   3   6   7   8   9
Receive_index:     1   2   3   5   6   7   8   9
Displacement:      0   0  -2   2   0   0   0   0

(b) Packet 4 is lost

Arrived sequence:  1   2   6   4   3   5   3   3
Receive_index:     1   2   3   4   5   6   -   -
Displacement:      0   0  -3   0   2   1   -   -

(c) Packet 3 is duplicated

If there is no reordering in a packet sequence, then R = ∅. Conventionally, we represent R in non-decreasing order of m. Tables 1(a)-(c) show examples of the arrived sequence (sequence number), the assigned receive_index, and the displacement, as well as the corresponding reorder sets, for three cases: a) without losses or duplication, b) with loss, and c) with duplication of packets. Consider the case where packets may be lost or duplicated in transit. Assume that the loss of a packet can be detected at the receiver. We skip the receive_index corresponding to the sequence number of the lost packet, i.e., if packet 'e' is lost, then the receive_index = e is not assigned. In the case of duplicates, we consider only the first copy of the packet at the receiver end and discard the duplicate, i.e., the duplicate is not assigned a receive_index. These two cases are illustrated in Tables 1(b) and (c), where the packet with sequence number 4 is lost and the packet with sequence number 3 is duplicated, respectively. How to detect the loss of packets on the fly in order to skip the receive_index, and how to deal with duplicate packets in a measurement environment, are addressed below and in detail in [8]. Reorder Density is defined as the discrete density of the frequency of packets with respect to their displacements, i.e., the lateness and earliness from the original position. Now the evaluation of RD at the
receiver will be explained. Lost packets and duplicates are taken into account by skipping the receive_index corresponding to lost packets and by not assigning a duplicate packet a receive_index. However, this process requires detection of losses and duplicates, both of which present implementation challenges. When is a packet considered to be lost? One possibility is to consider it lost if it does not arrive when it is expected but, if it arrives later, to go back and make appropriate corrections. At the other extreme, one can wait until the end of the received sequence to declare a packet as lost. However, this requires keeping track of all received packets, as well as applying corrections to computations performed so far. Both of these approaches are memory consuming and also preclude real-time evaluation of the metrics. Maintaining a threshold DT and an early-arrival buffer addresses this problem. If a packet is not received within DT packets from where it is expected, it is considered lost. A packet is classified as a duplicate on its arrival if it already exists in the early-arrival buffer or the current DT window, or if its packet number is less than the current receive_index. With the use of the threshold DT, since it is not known whether a packet is lost until DT packets are received, real-time RD evaluation may be done in one of two ways:

• Go-back DT: In this method, the rules are applied at each arrival. If a packet that was supposed to arrive DT places ago does not arrive, then its sequence number is skipped in the receive_index assignment, and RD is recomputed for the previous DT steps. Consider a received sequence (1,3,4,5,6,7,2) and DT = 3 (see the sketch following this list). As soon as 5 arrives, 2 is classified as lost and we go back and correct the previous DT receive_indices and displacements. When 2 actually arrives later, we do not assign a receive_index to this arrival, i.e., we consider it lost and discard the packet. This method requires recording the previous DT packet numbers, and additional processing as we recompute offsets when a packet is lost. However, if the amount of reordering is low, the overall computation is quicker than with the next method.

• Stay-back DT: Here the computation lags by DT packets, i.e., the packet with receive_index i is not used in the evaluation until DT more packets have arrived after that. Thus, we do not correct or adjust any displacements. This method also requires buffering of the next DT arrivals.
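The following sketch (illustrative Python, not the Perl implementation of [14]) shows a simplified version of the threshold bookkeeping for the example above; as a simplification of the DT rule, it declares a packet lost once DT higher-numbered packets have arrived without it.

```python
def classify_arrivals(arrived, dt):
    """Simplified threshold-DT bookkeeping: a missing packet is declared
    lost once dt higher-numbered packets arrive without it; arrivals that
    were already accepted or already declared lost are discarded.
    Sequence numbers are assumed to start at 1."""
    accepted, lost, events = set(), set(), []
    overdue = {}          # still-missing seq -> higher-numbered arrivals seen so far
    for seq in arrived:
        if seq in accepted or seq in lost:
            events.append((seq, "discarded"))   # duplicate or very late copy
            continue
        accepted.add(seq)
        events.append((seq, "accepted"))
        # Every smaller-numbered packet not yet seen is now one arrival more overdue.
        for missing in range(1, seq):
            if missing in accepted or missing in lost:
                continue
            overdue[missing] = overdue.get(missing, 0) + 1
            if overdue[missing] >= dt:
                lost.add(missing)               # its receive_index value will be skipped
                events.append((missing, "declared lost"))
    return events

# With the received sequence (1,3,4,5,6,7,2) and DT = 3, packet 2 is declared
# lost as soon as 5 arrives; the later arrival of 2 is then discarded.
print(classify_arrivals([1, 3, 4, 5, 6, 7, 2], dt=3))
```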

A small DT value is used in the above illustrations of the concept for convenience. It may be set much higher for practical measurements. A larger threshold value results in higher memory requirements for these implementations. However, the computational complexity in both cases is of order N, where N is the size of the received sequence. Use of the threshold DT also improves the robustness of RD. For example, if a packet were late by a large number, say 1000, then the next 1000 packets would be shown as early in the absence of a threshold. By using a threshold, we can eliminate such large impacts on RD due to a single reordering event. Furthermore, it allows us to recover from conditions such as a large number of missing or duplicate packets. We have not completely described this aspect in the present paper. Perl scripts and algorithms are available in [14].

Results
Figure 1 shows the observed RDs, which are also the reorder responses of the networks, from these sites. The RD provides a comprehensive measure of reordering, and we can draw a number of inferences from these measures: (i) Net-1 has the smallest average delay but, due to larger deviations, the amount of reordering is comparatively high. Conversely, looking at the shape of the RD, we can comment on the delay deviations in the network. The wider the RD spread, the higher the variance in delay. (ii) For Net-1 it is evident that the network deviates from the normal expectation, as a large number of packets arrive reordered. Knowing the corresponding RD, we can tune this network by allocating a larger buffer size to recover from reordering in a UDP application or by increasing the number of duplicate ACKs to wait for before fast retransmit with TCP [10, 12]. (iii) For Net-2, the application can recover from reordering by having a buffer size equal to 2 packets. Although Net-2 has approximately 2% more reordering than Net-3, applications using Net-2 will perform better due to the lower displacements of reordered packets. (iv) In the case of Net-3, the RD has a discontinuity. One percent of the packets can be as late as 4 positions. It is possible that, due to a phenomenon like packet striping in a network, these packets take an alternate path. Here, instead of using triple-ACK for fast retransmit, we could use 2-ACK, given that the effect of 1% reordering on performance is acceptable, or use 4-ACK to account for all reordered packets.


Figure 1. RD (plotted against earliness/lateness) based on measurements for Net-1 (209.211.x.x, USA), Net-2 (62.94.x.x, Italy), and Net-3 (130.195.x.x, New Zealand); DT = 5.

Discussion
The existing metrics to measure reordering are vague and insufficient to characterize packet reordering. We have presented a formal method for representing out-of-order sequences of packets, and defined the Reorder Density (RD) metric. RD characterizes and measures packet reordering comprehensively, is simple, and is orthogonal to losses and duplicates. It captures both the number of packets affected and the magnitude of reordering, and can be used as the probability density function corresponding to the displacement of an arbitrary packet. The metric can be evaluated in real time as the packets arrive at a node. A threshold DT limits the complexity of implementation by considering a packet that is late by DT to be lost. The computational complexity of the algorithm is O(N), where N is the number of packets in the sequence. The memory requirement for the implementation is proportional to DT. The use of DT also makes it robust, allowing it to recover from cases such as very early or very late packets, sequences of duplicates, or bursty losses. Further, the metric can be used to characterize the reordering introduced by a network and, under a fairly broad set of conditions, the reorder measurements of different subnets can be combined to predict the end-to-end reorder characteristics of a network. Currently, a sequence regeneration algorithm is available for RD measures [2]. The reorder response of a network depends on factors such as the network load, the background traffic, and the distribution of the inter-packet gap. At high sending rates, inter-packet gaps have negligible correlations, also validating the convolution results [16]. Our present work includes measurements to understand packet reordering over the Internet in more detail. By keeping track of RD for an on-going connection, one can dynamically tune transport protocols to obtain superior performance.

References
1. Banka, T., Bare, A., and Jayasumana, A., "Metrics for Degree of Reordering in Packet Sequences," Proc. of the IEEE 27th LCN, Nov. 2001, pp. 333-342.
2. Bare, A. A., "Measurement and Analysis of Packet Reordering," Masters Thesis, Department of Computer Science, Colorado State University, 2004.
3. Bellardo, J. and Savage, S., "Measuring Packet Reordering," Proc. of Internet Measurements Workshop (IMW'02), Nov. 2002, pp. 97-105.
4. Bennett, J. C. R., Partridge, C., and Shectman, N., "Packet Reordering is Not Pathological Network Behavior," IEEE/ACM Trans. on Networking, Dec. 1999, pp. 789-798.
5. Blanton, E. and Allman, M., "On Making TCP More Robust to Packet Reordering," ACM Computer Comm. Review, 32(1), Jan. 2002, pp. 20-30.
6. Bohacek, S., Hespanha, J., Lee, J., Lim, C., and Obraczka, K., "TCP-PR: TCP for Persistent Packet Reordering," Proc. of the IEEE 23rd ICDCS, May 2003, pp. 222-231.
7. Jaiswal, S., Iannaccone, G., Diot, C., Kurose, J., and Towsley, D., "Measurement and Classification of Out-of-sequence Packets in Tier-1 IP Backbone," Proc. of IEEE INFOCOM, Mar. 2003, pp. 1199-1209.
8. Jayasumana, A., Piratla, N. M., Bare, A. A., Banka, T., Whitner, R., and McCollom, J., "Reorder Density Function - A Metric for Packet Reordering Measurement," IETF draft (work in progress).
9. Liu, H., "A Trace Driven Study of Packet Level Parallelism," Proc. of International Conference on Communications (ICC'02), New York, NY, 2002, pp. 2191-2195.
10. Loguinov, D. and Radha, H., "End-to-End Internet Video Traffic Dynamics: Statistical Study and Analysis," Proc. of IEEE INFOCOM, Jun. 2002, pp. 723-732.
11. Morton, A., Ciavattone, L., Ramachandran, G., Shalunov, S., and Perser, J., "Packet Reordering Metric for IPPM," IETF draft (work in progress).
12. Paxson, V., "Measurements and Analysis of End-to-End Internet Dynamics," Ph.D. Dissertation, Computer Science Department, University of California, Berkeley, 1997.
13. Paxson, V., Almes, G., Mahdavi, J., and Mathis, M., "Framework for IP Performance Metrics," RFC 2330.
14. Perl scripts for RD, http://www.cnrl.colostate.edu/Reorder_perl_scripts.html. Last modified on Nov. 3, 2004.
15. Piratla, N., "Metrics, Measurements and Techniques for the End-to-end Characterization of Networks (tentative)," Ph.D. Dissertation, Department of Electrical and Computer Engineering, Colorado State University, (work in progress).
16. Piratla, N. M., Jayasumana, A. P., and Smith, H., "Overcoming the Effects of Correlation in Delay Measurements using Inter-Packet Gaps," Proc. of IEEE International Conference on Networks (ICON), Singapore, Nov. 2004, pp. 233-238.
17. Ruiz-Sanchez, M., Biersack, E. W., and Dabbous, W., "Survey and Taxonomy of IP Address Lookup Algorithms," IEEE Network, Mar./Apr. 2001, pp. 8-23.
18. Xia, Y. and Tse, D., "Analysis on Packet Resequencing for Reliable Network Protocols," Proc. of IEEE INFOCOM, San Francisco, CA, Mar. 2003, pp. 990-1000.
19. Zhang, M., Karp, B., Floyd, S., and Peterson, L., "RR-TCP: A Reordering-Robust TCP with DSACK," Proc. of the Eleventh IEEE International Conference on Network Protocols (ICNP 2003), Atlanta, GA, Nov. 2003, pp. 95-106.


Six Sigma and a Design of Experiments

Student Researcher: Brandon J. Ellis

Advisor: Dr. Mitch Wolff

Wright State University Mechanical Engineering Department

Abstract
Six Sigma (6σ) is a process improvement methodology used around the world by several successful corporations such as General Electric, Boeing, and Lockheed Martin. The objectives that drive 6σ are maximum profits and minimum defects. A critical tool of 6σ is design of experiments (DOE). When a corporation reaches 6σ (3.4 defects per million opportunities), the cost of quality drops to less than one percent of sales (Harry and Schroeder 2000). 6σ projects follow the DMAIC process: Define, Measure, Analyze, Improve, and Control. The Improve stage is where DOE solutions are proposed and implemented. This is where an engineer's ability to perform designed experiments is valuable.

Project Objectives
The objective of this project is to develop a classroom laboratory that shows how a factorial design of experiments is set up and performed in the 6σ Improve stage. This lab will test how three variables affect the volumetric efficiency (VE) of an internal combustion engine. The variables are engine speed, air/fuel mixture (A/F), and engine load. This laboratory also displays the validity of the mathematical calculations for VE. The objectives took two major efforts to achieve:
• 80 hours of Six Sigma Green Belt training through The Ohio State University over a 5-month period.
• Work with the senior design project team to create the lab and perform runs on the internal combustion engine test stand at Wright State University over an extended period.
The finalization of this lab is pending Mechanical Engineering department approval, and the lab may be brought to the classroom in the near future.

Methodology Used
The VE of an internal combustion engine is the actual mass of air inducted by the engine divided by the theoretical mass of air the engine can induct. Factors such as valve timing, intake/exhaust geometry, piston speed, load on the engine, and air/fuel ratio have an effect on the VE of the engine. In a factorial design of experiments, the number of experimental runs with n variables is determined by 2ⁿ. This lab uses three variables, so 2³ = 8 experimental runs would have to be performed. These variables would be set to high and low values to see their interactions. Engine speed had high and low values of 2500 and 1500 RPM. The load on the engine was set to either 75 lbs. or 150 lbs. The air/fuel ratio was set to balanced or rich (14.64 and 14.41-14.47). The test team had problems getting an accurate reading of flow percentage (essentially VE) from the data acquisition system that was installed on the test engine. The flow meter was designed for a larger, more powerful engine, and an accurate flow percentage was hard to obtain. However, the relative flow percentages are more valuable in this situation, since interactions between variables cause a change in VE. Once the runs were performed, the effects of each variable on VE were calculated. Next, the interactions between each pair of variables, and then among all three variables, were calculated. A simplified form of a data collection table is shown below.

Notes: For each of A, B, and C, "–" denotes the low value and "+" the high value of that variable. Interaction signs are calculated by multiplying the "+" or "–" entries across the row. The "Result" values are arbitrary percentages used for the example calculations.

Run #   A   B   C   A x B   B x C   A x C   A x B x C   Result
1       -   -   -     +       +       +         -          55
2       -   -   +     +       -       -         +          77
3       -   +   -     -       -       +         +          47
4       -   +   +     -       +       -         -          73
5       +   -   -     -       +       -         +          56
6       +   -   +     -       -       +         -          80
7       +   +   -     +       -       -         -          51
8       +   +   +     +       +       +         +          73



An example calculation of the effect of a variable can be shown as: Effect of A = [(Sum of Result with A set to “+” level) – (Sum of Result with A set to “–” level)]/4

For example, Effect of A = [(56+80+51+73) – (55+77+47+73)]/4 = 2. This means there is a 2% gain in the result when variable A is set from its low level to its high level. An example calculation of the interactions between variables can be shown as:

A x B interaction = [(Sum of Result from "+" rows in A x B column) – (Sum of Result from "–" rows in A x B column)]/4
A x B x C interaction = [(Sum of Result from "+" rows in A x B x C column) – (Sum of Result from "–" rows in A x B x C column)]/4

For example, A x B interaction = [(55+77+51+73) – (47+73+56+80)]/4 = 0. The interaction represents the change in the result when the variables in the interaction are set to the same level (both + or both –). In this case, if variables A and B were both set to their high level or both to their low level, there would be no change in the result. For the A x B x C interaction calculation, the change in the result would occur if all three variables were set to the same level. In this lab, a factorial designed experiment example will be performed to find the combination of factors that achieves the best VE. This combination can be confirmed by the VE equations used in the textbook for the internal combustion engines class.

Results Obtained
The results are a display of what the data collection for a DOE VE lab would look like. Note that since the calibration of the sensor that measured VE ("FLOW %") was off, an inaccurate absolute reading of VE was taken. This, however, does not affect the interaction calculations that were performed. The table below shows the experimental values of VE.

Run   A/F     RPM    Load   A/F x RPM   RPM x Load   A/F x Load   A/F x RPM x Load   FLOW %
1     14.64   1500    75        +            +            +               -           14.9
2     14.64   1500   150        +            -            -               +           23.5
3     14.64   2500    75        -            -            +               +           16.5
4     14.64   2500   150        -            +            -               -           23.7
5     14.41   1500    75        -            +            -               +           14.1
6     14.47   1500   150        -            -            +               -           22.7
7     14.41   2500    75        +            -            -               -           15.7
8     14.47   2500   150        +            +            +               +           22.4

A/F effect = -0.925. This means that if A/F is set from balanced to rich, there is a loss of 0.925% VE.
RPM effect = +0.775. This means that if RPM is set from 1500 to 2500, there is a gain of 0.775% VE.
Load effect = +7.775. This means that if Load is set from 75 lbs. to 150 lbs., there is a gain of 7.775% VE.
A/F x RPM = -0.125. This means there is a loss of 0.125% VE when A/F and RPM are set to the same level (+/-).
RPM x Load = -0.825. This means there is a loss of 0.825% VE when RPM and Load are set to the same level.
A/F x Load = -0.125. This means there is a loss of 0.125% VE when A/F and Load are set to the same level.
A/F x RPM x Load = -0.125. This means there is a loss of 0.125% VE if all three variables are set to the same level.
The best combination in this case is the one with the highest %VE (FLOW %): a balanced A/F ratio, the high level of RPM (2500), and the high load (150 lbs.). This combination yielded 23.7% (uncalibrated) VE.

Significance and Interpretation of Results
These results show that the mathematical interpretation of VE is correct. The RPM effect makes sense because the piston speed is approaching the maximum-torque RPM for the engine (>4000). The Load has the greatest effect on the VE because the throttle must be opened further to maintain engine speed. This DOE lab shows students a good way to learn how to systematically test three or more variables in a variety of situations. As for the 6σ interpretation of these results, the best combination of these three variables has been found. One has to ask whether further testing is needed, or whether there is sufficient data to find or correct a source of defects. In this case, the testing shows the physical effect each factor has on VE. It also shows how the factors interact with each other. DOE is a primary 6σ tool for mitigating design defects. It is imperative to eliminate design defects prior to manufacturing. This lab depicts only a small portion of a potential six-month or longer 6σ project.
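As a cross-check of the effect and interaction calculations above, the following short Python sketch (not part of the original lab; variable names are illustrative) recomputes them directly from the FLOW % table, coding each factor's low/high setting as -1/+1.

```python
# FLOW % (uncalibrated VE) from the eight runs, keyed by the (A/F, RPM, Load)
# levels coded as -1 (balanced, 1500 RPM, 75 lb) and +1 (rich, 2500 RPM, 150 lb).
flow = {
    (-1, -1, -1): 14.9, (-1, -1, +1): 23.5,
    (-1, +1, -1): 16.5, (-1, +1, +1): 23.7,
    (+1, -1, -1): 14.1, (+1, -1, +1): 22.7,
    (+1, +1, -1): 15.7, (+1, +1, +1): 22.4,
}

def effect(*factors):
    """Main effect (one factor index) or interaction (several indices):
    [sum of '+' rows - sum of '-' rows] / 4 for a 2^3 factorial design."""
    total = 0.0
    for levels, y in flow.items():
        sign = 1
        for i in factors:
            sign *= levels[i]
        total += sign * y
    return total / 4

names = ("A/F", "RPM", "Load")
for i, name in enumerate(names):
    print(f"{name} effect = {effect(i):+.3f}")
for i, j in [(0, 1), (1, 2), (0, 2)]:
    print(f"{names[i]} x {names[j]} = {effect(i, j):+.3f}")
print(f"A/F x RPM x Load = {effect(0, 1, 2):+.3f}")
# Reproduces the values reported above: -0.925, +0.775, +7.775, -0.125, -0.825, -0.125, -0.125.
```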


Acknowledgments and References
1. Ferguson, Colin R., and Kirkpatrick, Allan T. Internal Combustion Engines: Applied Thermosciences. New York: John Wiley and Sons, Inc., 2nd Ed., 2001.
2. Harry, Mikel, Ph.D., and Richard Schroeder. Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations. New York: Doubleday, 2000.
3. Keller, Paul. Six Sigma Demystified: A Self-Teaching Guide. Chicago: McGraw-Hill, 2005.
4. Pyzdek, Thomas. The Six Sigma Handbook. Chicago: McGraw-Hill, 2003.
5. Quality Council of Indiana. The Certified Six Sigma Black Belt Primer. W. Terre Haute: Quality Council of Indiana, 2001.


Computational Study of Engine Performance Using Computer Aided Simulation

Student Researcher: Ashlie B. Flegel

Advisor: Dr. Ray Hixon

The University of Toledo Mechanical Engineering Department

Abstract
The objective of the Formula SAE (FSAE) competition is to conceive, design, fabricate, and compete with a small formula-style racecar. There are restrictions placed on the car frame and engine. The cars are built with a team effort over a period of about one year and are taken to a worldwide competition for judging and comparison with approximately 140 other vehicles from colleges and universities throughout the world. The role of powertrain development is to design and tune an engine airflow management system/engine assembly that provides consistent available power across the entire working range. This will aid in developing a competitive car from a power standpoint while improving drivability.

Project Objectives
There are several areas of improvement the Powertrain Development group would like to achieve this year. The main goal is to achieve the most low-end power and a flat torque curve. In order to meet this goal, the powertrain system needs extensive revision. The 2006 car's powerplant is a 600cc Honda F4i engine. The air, by rule, is to be fed through a 20mm restrictor. The stock F4i normally feeds air through four 40mm throttle bodies. This greatly complicates the design of the Formula SAE car's induction and exhaust system. The first objective of this project is to design an intake with optimum flow that will produce maximum torque, using computer models to design the desired system. The second function of the project is proper tuning. The car has a programmable Performance Electronics fuel injection system. This allows special fuel maps to be used to create the desired power for each event that is run at competition. There are four different timed events the car competes in, and for better performance each requires a special map allocating power in different RPM ranges. In the past, the ability to tune and make a special program for each event in the competition has been a weak point. To achieve the desired goals of maximum torque and specialized maps, the racecar made several runs on a chassis dyno, which also incorporates the intake design. This not only provides low-end power and torque but ideally will enhance overall engine performance.

Methodology Used
Several designs were examined as possible applications for the 2006 car. Areas of consideration when investigating designs were decreasing inefficiencies due to cylinder overlap, smooth direct flow to each cylinder, a high power curve, weight, and reliability. The first two styles, similar in shape, offered a larger volume but had choppy corners that introduced more turbulence; they were also larger in size, adding bulk to the car. The last style is centered on Helmholtz resonator theory, which uses the oscillations of the cylindrical intake chamber at its natural frequency to induce pressure pulses that create higher boost or, more preferably, more torque (Heisler 1995). After investigating the computer models, the cylindrical intake was chosen and would be further proven when tuned on the dynamometer. The extra feature added to this induction system is adjustable runner lengths, to help gain more torque at low-end RPMs and offer more adjustability while tuning. Once the whole powertrain system was fabricated and the rest of the car was finished, tuning the car was imperative. The main objective of tuning is to tune the motor for the best drivability by focusing on achieving a high, flat torque curve, optimizing the intake runner length, and advancing the injector timing.
The next goal of tuning is to make specialized maps that utilize the maximum power available for each individual event run at the competition. One map will be geared toward fuel economy for the endurance event, while the other focuses on maximum acceleration.
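For context on the Helmholtz resonator concept mentioned above, the sketch below (illustrative only — the dimensions are made-up placeholders, not the team's actual plenum or runner geometry) evaluates the standard resonator frequency estimate f = (c / 2π)·√(A / (V·L_eff)).

```python
import math

def helmholtz_frequency(neck_area_m2, neck_length_m, volume_m3, c=343.0):
    """Natural frequency (Hz) of a simple Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L_eff)), using a common end-correction
    on the geometric neck length."""
    radius = math.sqrt(neck_area_m2 / math.pi)
    l_eff = neck_length_m + 1.7 * radius          # approximate end correction
    return (c / (2 * math.pi)) * math.sqrt(neck_area_m2 / (volume_m3 * l_eff))

# Placeholder geometry: a 3 L plenum fed by a 40 mm diameter, 250 mm long runner.
area = math.pi * (0.040 / 2) ** 2
print(f"Resonant frequency ≈ {helmholtz_frequency(area, 0.250, 0.003):.0f} Hz")
```

Changing the runner length shifts this frequency, which is the motivation for the adjustable runner lengths described above.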


Performance Electronics is the engine management system we utilize; it has a data logging system installed internally. This allows us to watch real-time data on our laptops and make adjustments to the engine while the car is running. To make our adjustments more accurate, a wide-band O2 sensor is utilized on the car, indicating our air/fuel ratio at the given RPM. This allows us to get the right mixture, achieving stoichiometric efficiency, and make use of the powerband the powertrain system offers. Once the car is tuned to stoichiometric, pulls on the dyno are made, which are still currently being run.

Results Obtained
The figure below illustrates a pull on the dyno. This is where the driver holds the throttle at wide open throttle (red line) and lets the RPM (black line) climb up into the powerband. The purple line illustrates the air/fuel ratio, which helps verify how to change the fuel injection system. This will result in a competitive car from a power standpoint while improving drivability.

Figure 1. The engine running at constant throttle.

References
1. Heisler, Heinz (1995). "Advanced Engine Technology," Burlington, MA: Butterworth-Heinemann.


Numerical Simulation of a Low Pressure Turbine Blade Employing Active Flow Control

Student Researcher: Marshall C. Galbraith

Advisor: Dr. Kirti (Karmen) Ghia

University of Cincinnati Department of Aerospace Engineering and Engineering Mechanics

Abstract
High-altitude aircraft experience a large drop in the Reynolds number (Re) from take-off conditions to cruise conditions. It has been shown in previous research performed by Simon and Volino that this reduction in Re causes the flow inside the turbine cascades to become laminar and separate more readily on the suction side of the turbine blade [1]. This boundary-layer separation is undesirable and greatly reduces the efficiency of the turbine. To prevent this loss of efficiency, research will be pursued for active and passive means to delay and/or control the flow separation. Lake et al. used a passive boundary-layer trip, dimples, and V-grooves in an extensive study to reduce separation on the Pak-B turbine blade [2]. Although these passive techniques were able to reduce the separation at fixed Re values, an active flow control method is needed for more efficient separation reduction over a range of Re values. Currently, researchers are investigating several different active flow control devices, including pulsating synthetic jets, vortex generator jets (VGJ), and moving protuberances. The proposed study intends to further investigate the mechanism of flow control via synthetic jets, which alternate between suction and blowing, on a low-pressure turbine blade utilizing a Large Eddy Simulation (LES) Computational Fluid Dynamics (CFD) solver, and to develop an approach to determine optimum values of the associated parameters of the synthetic jets, such as jet angle, blowing ratio, frequency, and duty cycle. However, before investigating the effectiveness of synthetic jets, the CFD simulation was correlated with experimental data on VGJs.

Nomenclature
U, V, W = non-dimensional velocity components in a Cartesian coordinate system
P = non-dimensionalized static pressure
Vmax = maximum velocity of the vortex generator jets
Vr = surface-normal jet velocity component
Vz = spanwise jet velocity component
Vθ = streamwise surface-tangent jet velocity component
β = pitch angle
γ = skew angle
φ = surface-normal angle at vortex generator jet locations

Subscripts
I, J, K = coordinate grid indices in the circumferential, surface-normal, and spanwise directions
JS = J = 1, the surface of the blade

Project Objectives
Flow separation is encountered in many engineering applications and is generally detrimental. Therefore, extensive experimental and computational research has been carried out on separated flows to gain a deeper physical understanding of their process and possible ramifications. An example of detrimental flow separation occurs at low Re in a low-pressure turbine (LPT) cascade, where a significant drop in efficiency of the LPT is observed between take-off conditions and cruise conditions, due to the associated drop in Re. At lower Re values, the predominantly laminar boundary layer on the LPT blades is susceptible to flow separation, which occurs near the aft portion of the suction side of the LPT blades. The separated flow causes higher losses, lower stage efficiency, and higher fuel consumption. Thus, the suppression of separation on the LPT blades could significantly increase both the performance and the efficiency of the aircraft as a whole.


Many experimental studies employing flow control strategies have been carried out in an effort to delay the separation process. Lake et al. employed a passive boundary-layer trip, dimples, and V-grooves, which are intended to delay the boundary-layer separation [2]. Loss coefficients were measured on modified and unmodified surfaces for a range of Re and turbulence intensities. With the use of dimples at the lowest Re, the loss coefficient was successfully reduced by 58%. However, these techniques require a modification to the turbine blade geometry. Altering the blade geometry to avoid low-Re separation is not a desirable solution, as it may affect the engine's performance at high Re values. Therefore, an ideal flow separation control strategy would be activated at low Re values yet dormant at high Re values. Huang et al. studied phased plasma actuators as a possible active flow control device [3]. The effects of Reynolds number and free-stream turbulence levels on the onset of separation and reattachment were analyzed. The separation location was observed to be relatively insensitive to the experimental conditions. On the other hand, the reattachment location was highly sensitive to both turbulence intensity and Re. Furthermore, the performance obtained from plasma actuators was comparable to that obtained with VGJs. VGJs are mounted below the surface of the blade and can be located at a variety of positions and operated at various angles, blowing ratios, and frequencies. Furthermore, the jet can be pulsed blowing, pulsed suction, or alternate between the two. Experiments have shown a drastic reduction of separation at low Re values, while no significant adverse effects were observed when VGJs were employed at higher (non-separating) Reynolds numbers [4]. Separation is presumably prevented by producing counter-rotating vortex pairs that energize the boundary layer through a momentum transfer with the freestream [5]. Rizzetta et al. recently conducted a numerical study of flow separation control using VGJs on the Pak-B LPT blade, obtaining a 22% decrease in the total pressure wake loss coefficient [6]. In addition, the VGJs shifted the separation location toward the trailing edge of the blade, and the vertical extent of the separated region was reduced. The present study intends to validate the CFD simulation of VGJs with experimental data [4]. Once grid and flow solver parameters which capture the proper flow features associated with VGJ flow control have been determined, simulations utilizing synthetic jets which alternate between blowing and suction will be performed, and their effectiveness will be compared with that of the VGJs [7].

Methodology
In order to capture the small-scale structures in the wake region of the LPT, FDL3DI, a higher-order accurate, parallel, Chimera, Large Eddy Simulation solver from Wright-Patterson Air Force Base, was chosen. FDL3DI has been proven reliable for many steady and unsteady fluid flow problems [8-12]. The FDL3DI code solves the unsteady, three-dimensional, compressible, unfiltered Navier-Stokes equations with the implicit approximate-factorization algorithm of Beam and Warming, employing Newton-like subiterations [13]. The efficiency of the implicit algorithm was increased by solving the factorized equations in diagonal form.
In order to maintain temporal accuracy, which can be degraded by the diagonal form, three subiterations were utilized within each time step [14]. FDL3DI is capable of up to tenth-order filtering and up to sixth-order compact differencing schemes, with lower-order filtering as well as lower-order schemes employed at the boundaries in order to maintain stability and accuracy on stretched curvilinear meshes. A fourth-order compact differencing scheme with a sixth-order filter was implemented in all simulations. All computational meshes were generated using automated software [15]. Because FDL3DI performs all calculations in non-dimensional quantities, the distances in all directions were non-dimensionalized by the blade chord. The original mesh consisted of six grids, which utilize the overset capability with higher-order interpolation of FDL3DI, as shown in Figure 1a). A dense grid was patched on the surface of the LPT to provide grid points to resolve the fine-scale flow features created by the VGJs, as shown in Figure 2. The grids were constructed by creating an x-y plane which was extruded in the z direction. With a total of 2.2×10⁶ points in the mesh, the patch grid consisted of 81 planes with clustering around the VGJ locations, while the remaining grids consisted of 37 planes. The span of the mesh was 0.168 non-dimensional units, allowing for 3 VGJ holes. The mesh was decomposed into 30 blocks with utility codes developed by Sherer et al. and the author, as shown in Figure 1b) [16]. The decomposition was constructed
with an approximately equal number of grid points in each block in order to balance the computational work load between processors. On each face of the blocks, an overset region with a minimum of five points was established with adjacent domains. Although this causes redundant computations, it maintains the formal higher-order accuracy of both the numerical differencing and filtering schemes. Figure 3 illustrates the boundary conditions employed. At the inlet, the velocity and density are prescribed and the pressure is extrapolated, while at the outlet the static pressure is prescribed and the remaining variables are extrapolated. Periodic boundary conditions are employed in the spanwise direction as well as perpendicular to the spanwise direction. A stretched mesh is incorporated at the inlet and outlet in order to prevent reflections from the boundaries. The wall is a no-slip adiabatic wall with a fourth-order accurate approximation to the pressure Neumann boundary condition. In the present simulation, the VGJ holes are approximated using a square-shaped geometry. The width of the hole in the spanwise direction is 3 grid points and the width in the circumferential direction is 4 grid points. The VGJ hole locations and size are shown in Figure 4. The pitch angle is defined as the angle that a VGJ makes with its projection on the local surface, and the skew angle is the angle which the projection of the jet makes with the local freestream direction. The pitch and skew angles of the VGJ are 30° and 90°, respectively. The VGJs are positioned at 63% of the chord. The velocity components of the VGJ in the normal, streamwise-tangential, and spanwise directions are defined in Eqs. 1 through 3.

Vr = Vmax sin β     (1)

Vθ = Vmax cos β cos γ     (2)

Vz = Vmax cos β sin γ     (3)

These velocity components are then converted to Cartesian velocity components through Eqs. 4 through 6.

U = Vθ sin φ − Vr cos φ     (4)

V = Vθ cos φ − Vr sin φ     (5)

W = Vz     (6)

The VGJ holes are assumed isothermal, with the pressure calculated by solving the inviscid normal-momentum equation in Eq. 7.

∂p/∂r = Vθ²/r − Vr (∂Vr/∂r)     (7)

The derivatives ∂Vr/∂r and ∂p/∂r are approximated with a fourth-order one-sided expression, as shown in Eqs. 8 and 9.

∂Vr/∂r = −(25/12) Vr(I,JS,K) + 4 Vr(I,JS+1,K) − 3 Vr(I,JS+2,K) + (4/3) Vr(I,JS+3,K) − (1/4) Vr(I,JS+4,K)     (8)

∂p/∂r = −(25/12) P(I,JS,K) + 4 P(I,JS+1,K) − 3 P(I,JS+2,K) + (4/3) P(I,JS+3,K) − (1/4) P(I,JS+4,K)     (9)

Combining Eqs. 7 and 9, and recognizing that for a skew angle of 90°, Vθ = 0, the pressure at the VGJ is calculated from Eq. 10.

P(I,JS,K) = (12/25) [ 4 P(I,JS+1,K) − 3 P(I,JS+2,K) + (4/3) P(I,JS+3,K) − (1/4) P(I,JS+4,K) + Vr (∂Vr/∂r) ]     (10)
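The following minimal Python sketch (not from the report; the angle values and interior pressures are illustrative placeholders) shows how Eqs. 1-6 and Eq. 10 could be evaluated at a jet-hole grid point.

```python
import math

def vgj_cartesian_velocity(v_max, beta, gamma, phi):
    """Eqs. (1)-(3): jet components in the (r, theta, z) frame for pitch angle
    beta and skew angle gamma, then Eqs. (4)-(6): conversion to Cartesian
    (U, V, W) using the surface-normal angle phi. Angles are in radians."""
    v_r = v_max * math.sin(beta)
    v_theta = v_max * math.cos(beta) * math.cos(gamma)
    v_z = v_max * math.cos(beta) * math.sin(gamma)
    u = v_theta * math.sin(phi) - v_r * math.cos(phi)
    v = v_theta * math.cos(phi) - v_r * math.sin(phi)
    return u, v, v_z

def vgj_wall_pressure(p1, p2, p3, p4, v_r, dvr_dr):
    """Eq. (10): wall pressure at the jet hole from the four interior pressure
    values and the Vr*(dVr/dr) term (skew angle of 90 deg, so Vtheta = 0)."""
    return (12.0 / 25.0) * (4.0 * p1 - 3.0 * p2 + (4.0 / 3.0) * p3
                            - 0.25 * p4 + v_r * dvr_dr)

# Pitch 30 deg and skew 90 deg, as in the present study; phi is a placeholder.
print(vgj_cartesian_velocity(2.0, math.radians(30), math.radians(90), math.radians(10)))
print(vgj_wall_pressure(1.01, 1.00, 0.99, 0.98, 1.0, -0.05))
```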

In the present simulation, a uniform jet velocity is provided across the hole with a constant blowing ratio of 2.0.

Results Obtained
An iso-surface of the U component of velocity serves to illustrate the difference between a second-order spatially accurate and a fourth-order spatially accurate simulation in Figure 5. The importance of the higher-order accuracy is observed particularly in the wake on the suction surface of the LPT. Second-order accuracy has a tendency to over-predict the coherency of the vortex structures in the wake of the LPT, whereas the higher-order method resolves the small-scale structures. Experimentally measured Cp values are compared with computed values in Figure 6. Despite the overall shift in Cp values, the baseline case captures the general shape of the pressure distribution on the pressure surface as well as over the leading 60% of the suction surface. However, the steep gradient aft of the 60% chord indicates that the computed separation region is smaller than what was measured experimentally. In fact, besides the shift in values, the Cp distribution with the flow control active deviates little from the baseline case. Fluctuations observed in the Cp values on the suction side of the LPT have been attributed to discontinuities in the grid on the surface of the LPT. These will be remedied through smoothing of the grid. Contours of the instantaneous spanwise component of vorticity at half the span for the baseline and controlled cases are shown in Figure 7, while contours of the spanwise- and time-averaged spanwise component of vorticity for both cases are shown in Figure 8. Note that the lines in the vorticity plots which resemble the contours of the grid are a result of post-processing calculations. As indicated by the pressure coefficient values, only slight differences between the cases are observed. The instantaneous baseline contours show more structures in the separation region than the controlled case, while the time-averaged contours are nearly identical. A slight protuberance in the vorticity contour on the suction surface is observed in the interpolated region of the patch grid. Unfortunately, the drastic difference in spanwise grid points between the patch grid and the remaining grids appears to be causing artificial flow structures which prevent the flow from separating in the wake in the baseline case. Therefore, no practical conclusions can be drawn from these calculations, other than that improvements to the grid and boundary conditions are required in order to remedy these discrepancies. Future work requires further grid refinement, in particular on the suction surface and in the wake region. In addition, the spanwise direction requires refinement. The boundary conditions also require revision. Rather than simply prescribing velocity, pressure, and density at the inlet, an inlet boundary condition employed by Rizzetta et al., which maintains total pressure and temperature, is expected to correct the shift in pressure coefficient values.

Figures

Figure 1. a) Original computational mesh; b) decomposed mesh.


Figure 2. Vortex generator jet patch grid.

Figure 3. Boundary conditions.

Figure 4. Vortex generator jet (VGJ holes at 63% chord).

Figure 5. Comparison of spatial accuracy: a) 2nd-order spatial accuracy; b) 4th-order spatial accuracy.


Acknowledgments
Computational resources were supported in part by a grant from the U. S. Department of Defense Major Shared Resource Center at Wright-Patterson AFB. The author wishes to thank Dr. Ghia for his never-ending support and encouragement. The author is also grateful to Dr. D. Rizetta, Dr. S. Sherer, Dr. P. Morgan, and others at the Computational Sciences Center of Excellence of the Air Vehicles Directorate at the Air Force Research Laboratory for their support.

Figure 6. Experimental, controlled, and baseline coefficients of pressure.

Figure 7. Instantaneous contours of the Z component of vorticity: a) baseline; b) controlled.

Figure 8. Time-averaged contours of the Z component of vorticity: a) baseline; b) controlled.


References
1. Simon, T. W. and Volino, R. J., "Separating and Separated Boundary Layers," Technical Report WL-TR-96-2092, Wright Laboratory, 1996.
2. Lake, J. P., King, P. I., and Rivir, R. B., "Low Reynolds Number Loss Reduction on Turbine Blades with Dimples and V-Grooves," AIAA Paper 00-0738, Jan. 2000.
3. Huang, J., Corke, T. C., and Thomas, F. O., "Plasma Actuators for Separation Control of Low Pressure Turbine Blades," AIAA Paper 2003-1027, 2003.
4. Bons, J. P., Sondergaard, R., and Rivir, R. B., "Turbine Separation Control Using Pulsed Vortex Generator Jets," Journal of Turbomachinery, Vol. 123, No. 2, 2001, pp. 198-206.
5. Johnston, J. P. and Nishi, M., "Vortex Generator Jets - Means for Flow Separation Control," AIAA Journal, Vol. 28, 1990, pp. 429-436.
6. Rizzetta, D. P. and Visbal, M. R., "Numerical Investigation of Transitional Flow Through a Low-Pressure Turbine Cascade," AIAA Paper 2003-3587, 2003.
7. Galbraith, M. C., "Numerical Simulations of a High-Lift Airfoil Employing Active Flow Control," AIAA Paper 2006-147, Jan. 2006.
8. Gordnier, R. E. and Visbal, M. R., "Numerical Simulation of Delta-Wing Roll," AIAA Paper 93-0554, Jan. 1993.
9. Visbal, M. R., "Computational Study of Vortex Breakdown on a Pitching Delta Wing," AIAA Paper 93-2974, Jul. 1993.
10. Visbal, M. R., Gaitonde, D., and Gogineni, S., "Direct Numerical Simulation of a Forced Transitional Plane Wall Jet," AIAA Paper 98-2643, Jun. 1998.
11. Rizzetta, D. P., Visbal, M. R., and Blaisdell, G. A., "A Time-Implicit High-Order Compact Differencing and Filtering Scheme for Large-Eddy Simulation," International Journal for Numerical Methods in Fluids, Vol. 42, No. 6, Jun. 2003, pp. 665-693.
12. Rizzetta, D. P. and Visbal, M. R., "Numerical Investigation of Transitional Flow Through a Low-Pressure Turbine Cascade," AIAA Paper 2003-3587, Jun. 2003.
13. Beam, R. and Warming, R., "An Implicit Factored Scheme for the Compressible Navier-Stokes Equations," AIAA Journal, Vol. 16, No. 4, Apr. 1978, pp. 393-402.
14. Pulliam, T. H. and Chaussee, D. S., "A Diagonal Form of an Implicit Approximate-Factorization Algorithm," Journal of Computational Physics, Vol. 39, No. 2, Feb. 1981, pp. 347-363.
15. Steinbrenner, J. P., Chawner, J. P., and Fouts, C. L., "The GRIDGEN 3D Multiple Block Grid Generation System, Volume II: User's Manual," Technical Report WRDC-TR-90-3022, Wright Research and Development Center, Wright-Patterson AFB, OH, Feb. 1991.
16. Sherer, E. S., Visbal, M. R., and Galbraith, M. C., "Automated Preprocessing Tools for Use with a High-Order Overset-Grid Algorithm," AIAA Paper 2006-1147, Jan. 2006.


High-Pressure Liquid Chromatography Analysis of a Vapor/Mist Phase Lubricant

Student Researcher: Maria J. Gatica

Advisor: Wilfredo Morales

Carnegie Mellon University Department of Biomedical Engineering

Abstract
Vapor/mist phase lubrication is a novel lubrication method for high-temperature aerospace applications. The advantage of this technology is that it requires small amounts of the lubricating agent. Nearly all past work investigating this novel lubrication approach has focused on the use of phosphate esters as the lubricating agent. Phosphate esters, however, have proven to provide an effective lubricant environment only under certain circumstances, and tend to be inadequate for long-term use. A new vapor/mist phase lubricant, a C-ether, has been successfully tested using a high-load spur gearbox rig. A high-pressure liquid chromatography system was used in a comparative study to assess degradation of C-ether lubricants. Comparison between a control sample (unused C-ether) and C-ether collected from the gearbox (30-hour) tests indicated that no detectable degradation was present in the lubricant. These results suggest that this lubricant can be re-used and that longer tests are needed to characterize lubricant degradation behavior.

Objectives and Results
Vapor/Mist Phase Lubricant (VMPL) is a novel lubrication technology using small quantities of an organic liquid. This liquid is vaporized, or atomized in the form of a mist, into a high-velocity carrier stream. This stream is then directed toward an environment where interacting parts require lubrication for wear and friction control purposes. The lubricating stream undergoes a chemical reaction upon impinging on the solid surfaces (typically at high temperature), generating a lubricious film. The film can alter the friction properties or, more often, act as a "sacrificial" film that wears away while a new coating/layer of film is formed. Nearly all previous work by NASA and the USAF has focused on the analysis of phosphate esters, an organo-phosphorous material that has proven to be an effective VMPL lubricant. These esters have been used to adequately lubricate bearings and gears. Over time, however, phosphates seem to react extensively or lose their ability to react with the interacting surfaces, and excessive wear is observed. C-ether was recently identified as a new lubricating agent suitable for VMPL. This ether was tested in a high-load gearbox operating at 10,000 rpm. Under these conditions, C-ether used in VMPL mode provided excellent lubrication, with no detectable wear of the gear teeth over a 30-hour continuous test. In this research, we investigated a characterization technique to assess lubricant degradation. Although small volumes of lubricant were required, the lubricant is expensive and, in order to be a competitive alternative for high-temperature lubrication, long-term use needs to be ensured. Therefore, we needed to know the extent of lubricant degradation and/or whether the lubricant could be re-used after a 30-hour test. High-pressure liquid chromatography (HPLC) was the analysis method used to assess lubricant degradation. HPLC is a method of separating different molecules as they are pumped under high pressure through a series of separation columns. There are several classes of HPLC separation but, for this work, size exclusion chromatography was used. In size exclusion chromatography, the molecules are separated by their molecular size, which is a function of their molecular weight. The larger molecules are separated from the smaller molecules, elute from the separation column first, and are detected and quantified.


The analysis consisted of a comparative study in which HPLC spectra for lubricants used in VMPL mode were compared against those of a control sample (unused C-ether). These comparisons revealed that the new lubricant was mainly composed of three components. Spectra for lubricants collected from the spur gearbox revealed the same peaks and an identical composition, which led us to believe that the lubricant was not significantly degraded after the 30-hour tests and thus could be reused. Longer-term tests with new (unused) and recycled lubricants are to be run to characterize the reliability of C-ether for long-term lubrication applications.

Acknowledgments I would first like to thank NASA Glenn Research Center and the Ohio Aerospace Institute for their resources. I would also like to thank Mary Roberts from OAI and Darla Kimbro from the NASA SHARP Program for providing me the unique opportunity of participating in the NASA SHARP and LERCIP Programs. Through these programs, I was able to explore several engineering and science disciplines and experienced the excitement of working in research at one of the nation’s leading research facilities. Secondly, I would like to thank my mentor, Dr. Wilfredo Morales, for his wisdom and guidance throughout my internship, and for providing a mere student like me the opportunity to participate in such an interesting research project.


Improving Nutrient Absorption in Zero Gravity

Student Researcher: Eric L. Hehl

Advisor: Tekla Madaras

Owens Community College Dietetic Technology

Abstract One of the most perilous hardships of extended space exploration is the physiological breakdown of the human body. The natural degradation of the body’s muscle tissue, bone, and red blood cell count is significant enough to deter any prolonged journey. The ability to halt the loss of, and possibly maintain, muscle and bone mass, and to preserve the red blood cell count, would greatly improve the ability of our research missions in space to complete long-term research. The lack of gravity, and of the stress it normally places on the astronauts’ bodies, has the effect of extreme erosion on the musculoskeletal system. Degeneration of the skeletal system is noted at a rate of approximately 6-24% per year (Bruce). This loss occurs primarily in the long bones of the legs and in the cervical vertebrae (Orenstein). “Vibration therapy,” which consists of barely perceptible vibrations delivered into the musculoskeletal system, has been shown not only to maintain bone density but to improve it in some studies (Brown). These minuscule vibrations can have the same effect that intense voluntary exercise and/or weight-bearing activity can have. The vibrations are thought to initiate the body’s stress response, which increases nutrient uptake (particularly, in this case, calcium and vitamin D, among others) and thereby improves bone density. This is especially important when considering the lack of ultraviolet activation of vitamin D precursors in space; without these precursors, osteocyte synthesis would slow, if not practically halt (Phillips). Implementation of this technology can be carried out in several ways, the most prominent of which would be positioning the unit within, or in contact with, the astronauts’ sleeping chambers. This would allow the unit to be used while the astronaut carries out his or her sleep schedule, which also increases efficiency because the astronaut remains free to perform actions other than “vibration therapy” while awake. This technology, in conjunction with all other current nutritional precautions, could allow for prolonged research missions in deep space, furthering mankind’s exploration. Another way to execute this “osteomassage” would be to have the astronaut stand on a “vibration plate” several times throughout the day, which would deliver these vibrations and stimulate the skeletal system.

Objectives / Results Several tests have been conducted in the interest of this “vibration therapy.” In one test, scientists suspended rats’ hind ends for a period of 28 days in order to simulate zero gravity. Some of the rats were allowed weight-bearing activity for ten minutes per day, while a second group was exposed to mechanical vibrations for ten minutes per day. A third group was not allowed weight-bearing activity of any kind and was not exposed to any vibrations. The results showed that the rats that were not allowed ten minutes of weight-bearing exercise per day, but were exposed to the vibrations, maintained a higher percentage of bone mass than those that were allowed ten minutes of weight-bearing activity per day. Those that were not exposed to any exterior forces lost the most bone mass, as suspected (Bruce). A similar study involved the use of sheep. Sheep in the test group were exposed to low frequency vibrations in the long bones of their posterior limbs.
Upon completion of their daily therapy, they were returned to the control group, which consisted of sheep that had not received any additional therapy. Over the course of one year, the study showed that the bone density of the test group of sheep was indeed greater than that of the control group (Bruce).


Currently there has not been a great deal of testing of this “vibration therapy” on human beings, but when one considers the results of these two studies, one cannot help but be optimistic. The application of this technology reaches far beyond the space program. Current health trends in the United States suggest that many people would benefit from such therapy. Perhaps this “vibration therapy,” a technology that would allow us to finally travel to Mars, would also be a stepping stone here on Earth toward a happier and healthier tomorrow.

Acknowledgments The author would like to thank NASA Glenn Research Center for the scholarship opportunity and for providing an outlet for presentation of this research. The author would also like to thank Tekla Madaras for her guidance and expertise regarding nutrition and dietetic technology. This report has reminded the author to not only reach for new horizons, but to stretch beyond the stratosphere.

Works Cited / References
Brown, Dwayne and Angeline Judex. “‘Good Vibrations’ May Prevent Bone Loss In Space.” Site updated April 19th, 2006 (updated daily). Retrieved March 14th, 2006. http://www.sciencedaily.com/releases/2001/10/011003065112.htm
Bruce III, Robert Douglas. “The Problem of Bone Loss During Space Flight and the Need For More Effective Treatments To Make a Mission to Mars Safer.” Site updated May 28, 2002. Retrieved March 14th, 2006. http://www.dartmouth.edu/~humbio01/s_papers/2002/Bruce.pdf
Orenstein, Beth. “Lost In Space: Bone Mass.” Site updated August 2nd, 2004. Retrieved April 1st, 2006. http://www.radiologytoday.net/archive/rt_080204p10.shtml
Phillips, Robert, DVM, PhD. “Nutrition in Space.” PowerPoint presentation. Retrieved March 15th, 2006. http://www1.dfrc.nasa.gov/Education///Educator/Workshops/2001/PDF/spacefood.pdf
Siff, Mel C., PhD. “Macrocurrent and Microcurrent Electrostimulation in Sport.” Site updated July 2000. Retrieved March 26th, 2006. http://www.sportscience.org/SPORTSCI/JANUARY/macrocurrent_and_microcurrent_el.htm


Calcium Stores in Tetrahymena Thermophila

Student Researcher: Stacey A. Henness

Advisor: Dr. Heather Kuruvilla

Cedarville University Science and Mathematics Department

Abstract Calcium is a ubiquitous signaling molecule within cells. In most mammalian cells, internal stores of calcium mainly reside in the endoplasmic reticulum. It is thought that in protozoans, many of the calcium stores may be in sacs which lie directly under the plasma membrane, rather than in the endoplasmic reticulum (ER). Thapsigargin is an inhibitor of the ER calcium ATPase, which blocks calcium signaling in Tetrahymena. In order to determine where the calcium stores were located in the protozoan, Tetrahymena thermophila, I double-labeled fixed specimens with fluorescent BODIPY™ thapsigargin and BODIPY™ ER Tracker™. I found that the two dyes co-localized, suggesting that thapsigargin is working by binding to the ER calcium ATPase, rather than through some unknown drug mechanism or side effect. As a negative control, I double-labeled fixed specimens with BODIPY™ MitoTracker™ and BODIPY™ thapsigargin. These two dyes localized to different areas of the cell. It was concluded that the ER is not present in localized organelles as was originally assumed. Instead, it is dispersed throughout the intracellular space, and calcium is stored in the ER of Tetrahymena thermophila. Now that I have this data in Tetrahymena, I hope to do similar studies in Paramecium for the sake of comparison. To lay the groundwork for such a comparison, I am currently conducting studies of chemorepellent mechanisms in Paramecium, in order to determine whether calcium is involved. Project Objectives In this project the objective was to see where calcium stores are located in the protozoan, Tetrahymena thermophila. GTP utilizes a tyrosine kinase pathway to signal avoidance. Calcium is involved in ciliary reversal, but not required for kinase signaling. It is known that calcium is stored within the cells of Tetrahymena, but the location of these storehouses remains unknown. The hypothesis was that Tetrahymena stores its calcium in the endoplasmic reticulum of the cell. Different fluorescent dyes are used in conjunction with a fluorescent microscope in order to determine where calcium is located. ER tracker is one such dye which binds to the endoplasmic reticulum of the cell, regardless of the presence of calcium. Both BODIPY™ thapsigargin and BODIPY™ ryanodine bind in the presence of calcium. Thapsigargin is an inhibitor of the ER calcium ATPase in many cells and an inhibitor of the GTP chemoresponse in Tetrahymena. Ryanodine specifically binds to and modulates Ca2+ release channels that regulate intracellular Ca2+ levels. Mitotracker™ selectively binds to the mitochondria in the cell. Therefore, this was used as a negative control against any nonspecific binding obtained during staining. Methodology In the beginning of my research, the literature for the various fluorescence dyes was read for sample procedures, and previous literature about Tetrahymena calcium pathways was noted. After finding the recommended dye concentrations, three live Tetrahymena cultures were labeled with ryanodine, ER-tracker, and thapsigargin. These cultures merely served as evidence that labeling in the Tetrahymena could occur. Since the cells were constantly moving around, a concentration of 10% formaldehyde/90% buffer was used to kill and fix the cells. This allowed photography with a fluorescence scope to identify the areas to which the dyes were labeling.

97

Page 120: nasa / ohio space grant consortium 2005-2006 annual student

In the first few weeks of research, care was taken to determine the correct concentrations of dyes to be used. Excess dye resulted in non-specific binding, while an insufficient concentration of dye resulted in dim, blurry images when photographed through the scope. Stock solutions were prepared in DMSO at the concentrations recommended by Invitrogen. The concentrations for the thapsigargin, ryanodine, and ER tracker were 1 µg/µl. Next, double labeling (thapsigargin with ER tracker and ryanodine with ER tracker) of Tetrahymena was implemented to determine whether calcium stores resided in the ER. Since the dyes were competitively labeling the cell, the dye concentrations from the single labeling could not be used directly, but they served as a starting point. In addition, it was helpful to starve the cells in order to shrink the food vacuoles, which eliminated non-specific binding. Final concentrations for double labeling are listed in the results section of this report. About halfway through the semester, Mitotracker was added to the repertoire of dyes to serve as a control to indicate that the other dyes were indeed labeling ER and calcium stores. The concentrations for those solutions are also listed in the results section of this report.

Results and Discussion Single labeling with ER tracker showed a generalized binding dispersed throughout the cell, contrary to the assumption that the endoplasmic reticulum in Tetrahymena is found as a localized organelle. The Mitotracker labeling was used as a negative control against ER tracker, since it initially appeared that ER tracker was binding non-specifically. Comparison of the cells labeled with Mitotracker to the cells labeled with ER tracker confirmed that ER tracker was, in fact, binding to highly dispersed ER throughout the cell, and not merely binding non-specifically to all areas. After double labeling with thapsigargin and ER tracker, it was seen that both bound to the same area of the cell, confirming the presupposition that calcium is stored in the ER. Double labeling with ryanodine and ER tracker also confirmed these results. Double labeling with thapsigargin and Mitotracker showed that calcium bound to thapsigargin, and consequently to the ER, but did not bind to the mitochondria of the cell. Therefore, it was concluded that the ER is not present in a central organelle as was originally assumed. Instead, it is dispersed throughout the intracellular space, similar to secretory vesicles, and calcium was found to be localized in the ER of the cell as was initially hypothesized.

Further research on calcium signaling in Tetrahymena, as well as in other protozoans, remains to be done. In fact, I am presently conducting studies of chemorepellent mechanisms in Paramecium in order to determine whether calcium is involved. The graph below compares my new Paramecium data with my old Tetrahymena data. In order to finish up my comparisons, I need to do cross-adaptation studies and length-of-adaptation studies for the various chemicals involved in cell signaling.

Charts and Figures Concentrations for double labeling procedures:

Thapsigargin with ER: 1:100000; ER tracker: 1:998; Thapsigargin: 1:998
ER tracker with Ryanodine: 1:100000; ER tracker: 3:947; Ryanodine: 50:947


Concentrations for Mitotracker control group: (see below)

Paramecium Avoidance Data: [Figure: percent of cells showing avoidance plotted against polycation concentration, 0.001-100 µM, for VIP, PACAP-38, PACAP 6-38, PACAP 1-27, and lysozyme.]

Note that lysozyme and VIP are similar ligands in Tetrahymena, yet have wildly disparate characteristics in Paramecium: lysozyme is the worst repellent in Tetrahymena and the best repellent in Paramecium. **Special thanks to Dr. Kuruvilla for all of her help with this research.

References
1. Bartholomew, J., Abraham, H., Black, A., Hamilton, T., Reichart, J., Mundy, R., Recktenwal, Kuruvilla, H. 2005. GTP signaling in Tetrahymena thermophila involves a tyrosine kinase pathway coupled to NO and cGMP. Acta Protozoologica, in press.

2. Hennessey, T. M., Kim, D. Y., Oberski, D. J., Hard, R., Rankin, S. A., Pennock, D. G. 2002. Inner arm dynein 1 is essential for Ca++- dependent ciliary reversal in Tetrahymena thermophila. Cell Motil. Cytoskel., 53: 281-288.

3. Kim, M. Y., Kuruvilla, H. G., Raghu, S., Hennessey, T. M. 1999. ATP reception and chemosensory adaptation in Tetrahymena.

Mito-tracker with Thapsigargin: Thapsigargin 1:994; Mito-tracker 5:994
Diluted Mito-tracker with Thapsigargin: Thapsigargin 10:989; Mito-tracker 1:989


Cone Penetrometer Equipped with Piezoelectric Sensors for Characterization of Lunar and Martian Soils

Student Researcher: Heather Ann Hlasko

Advisor: Dr. Xiangwu (David) Zeng

Case Western Reserve University Department of Civil Engineering

Abstract The mechanical properties of Lunar and Martian subsurface material are important parameters in landing on the Moon and Mars, respectively. Any future plans of going back to the Moon and on to Mars rest on one thing: the condition of the soils. Spacecraft will land on it. The astronauts’ homes may be dug in it. Space stations may be constructed on it. Currently, there is not enough information about these soils, especially for the planning of excavation and construction sites on the Lunar and Martian surfaces. The lack of such data can lead to ineffective surface operations, unstable constructions, and catastrophic collapses. A new field testing device has been developed in the geotechnical laboratory of Case Western Reserve University (Case) to measure the mechanical properties of granular soils. These properties include stiffness (elastic modulus, shear modulus, and constrained modulus) and Poisson’s ratio. The device consists of a pair of cone penetrometers, each fitted with two piezoelectric sensors, which can easily be pushed into foundation soils. The following paper proposes the idea and methodology behind the use of this type of device for soil testing on the Moon and Mars. The data acquired with the current device are presented along with suggested improvements for the development of a regolith penetrometer.

Project Objectives The current device developed at Case consists of a pair of cone penetrometers, each fitted with two piezoelectric sensors, which can easily be pushed into foundation soils. One set of sensors is used as wave transmitters, while the other set serves as wave receivers. An electrical pulse produced by a function generator is used to activate the transmitters. Vibration of the transmitters produces primary and shear waves that propagate through the soil and are captured by the receivers. From the measured velocities of shear and primary waves, soil stiffness and Poisson’s ratio can be determined. The technique has been proven to produce reliable results in the laboratory, and a number of papers have been published on it by Zeng et al. (2003a, 2003b, 2003c, 2004). In the proposed project we will develop this technique into a low-cost, automated, and mobile unit that can penetrate the Lunar and Martian subsurface and quickly determine its mechanical properties. This device will employ sensors to measure penetration resistance and skin friction as well as piezoelectric sensors to determine material stiffness at different depths by generating and measuring acoustic waves. The system will be lightweight, mobile, versatile, user friendly, and applicable to all types of soils and field conditions. It is anticipated that the completed system will provide a low-cost, mobile, and automated means of rapidly measuring the mechanical properties of the soils on the Moon and Mars. The development of this device will make significant contributions to the space program.

Methodology Used The experimental technique used in the current design to measure elastic modulus, shear modulus, constrained modulus, and Poisson’s ratio of soils takes advantage of piezoelectric sensors developed in recent years. These sensors are made of piezoelectric ceramic materials: an electrical excitation applied to a transmitter element produces a mechanical vibration, which generates shear (s) waves for bender elements and primary (p) waves for extender elements in a soil.
Similarly for a wave receiver, a mechanical vibration of the element induced by the waves leads to an electrical output.


Therefore, the velocities of the s- and p-waves can be determined by measuring their travel times and the distance between the wave transmitter and receiver. Since the maximum strain generated by a piezoelectric sensor in the surrounding soil is on the order of 10^-3 %, as reported by Dyvik and Madshus (1985), the stress-strain relationship is within the elastic range of soils. This technique has been used in recent years by a number of researchers, such as Dyvik and Madshus (1985), Thomann and Hryciw (1990), Jovicic et al. (1996), Viggiani and Atkinson (1995), Hryciw and Thomann (1993), Jovicic and Coop (1998), and Zeng and Ni (1998, 1999), to measure the stiffness of sands and clays in the laboratory. Figure 1 shows the current piezo-cone penetrometer. A 33120A Agilent Waveform Generator is used to produce the triggering source signals. A square impulse wave is used as the source signal as opposed to a regular sine wave; square waves have been known to produce clearer and more reliable results. A 54624A Agilent Oscilloscope is used to capture and display the received signals, and a 467A Hewlett Packard Power Amplifier is used to amplify the received signals so that they are clearly captured and displayed by the oscilloscope. The travel times of the s- and p-waves from the tip of the transmitter to the tip of the receiver are then determined. The velocities of the s- and p-waves can be calculated from the recorded travel times and the distance between the tips of a transmitter and receiver. The piezo-cone penetrometer consists of one set of bender elements, one set of extender elements, two rectangular push rods, two solid removable cones, and one connection/extension rod. The push rods themselves each consist of two pieces of ¼” x 2” x 12” flat stock aluminum fixed with 12 screws. Two rectangular areas for the bender and extender elements were milled in the steel as well as an area to place and protect the lead wires. The cone tips are also fabricated from stainless steel and are removable, as they simply screw into the base of the rods. The penetrometers themselves are connected to each other with a horizontally adjustable connection/extension rod. The rod is held in place by a set screw which can be adjusted to provide differing lengths between the elements for optimum performance. The elements themselves can also be horizontally adjusted for maximum performance or maximum protection. The cone penetrometers can be pushed into the base material either by hand or by using an ultrasonic vibrator, which uses high frequency vibration to drive the penetrometers into the ground smoothly without creating a significant disturbance to the soil. Geologists have used similar techniques to drive penetrometers into rock efficiently without any damage to the sensors on the penetrometers. The entire system (including the computer and batteries) weighs less than 15 pounds. It can be run on a car battery or several 9-volt batteries connected in series. The system is mobile and simple enough that one technician with some initial training can carry out the test and data interpretation.

Calculations Used to Determine Soil Properties: Supposing that the distance between an s-wave transmitter and an s-wave receiver is Ls and the time for the wave to travel this distance is ts, the average shear wave velocity is,

Vs = Ls / ts

Similarly for P waves we have,

Vp = Lp / tp

Where Lp is the distance between the extender element transmitter and receiver and tp is the travel time of the p-wave. The shear modulus of the soil would be,

Gmax = ρ * Vs^2


In which ρ is the mass density of the soil. In the field, ρ can be determined by a nuclear density apparatus or by estimation. The constrained modulus of the soil would be,

M = ρ * Vp^2

The Poisson’s ratio, µ, can be calculated as,

µ = [(M/Gmax – 2)/(2M/Gmax – 2)]

The elastic modulus can be determined using the following equation,

E = 2Gmax(1 + µ)
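To illustrate how these relationships chain together, here is a minimal Python sketch (my own illustration, not part of the facility's data acquisition software) that converts measured travel times and tip-to-tip distances into the soil parameters defined above. The example distances, travel times, and density are hypothetical.

```python
def soil_properties(Ls, ts, Lp, tp, rho):
    """Compute wave velocities and elastic parameters from bender/extender data.

    Ls, Lp : transmitter-to-receiver distances for s- and p-waves (m)
    ts, tp : measured travel times (s)
    rho    : soil mass density (kg/m^3)
    """
    Vs = Ls / ts                                 # shear wave velocity
    Vp = Lp / tp                                 # primary wave velocity
    Gmax = rho * Vs**2                           # shear modulus (Pa)
    M = rho * Vp**2                              # constrained modulus (Pa)
    mu = (M / Gmax - 2) / (2 * M / Gmax - 2)     # Poisson's ratio
    E = 2 * Gmax * (1 + mu)                      # elastic modulus (Pa)
    return Vs, Vp, Gmax, M, mu, E

# Hypothetical reading: 10 cm tip spacing, travel times in seconds
Vs, Vp, Gmax, M, mu, E = soil_properties(
    Ls=0.10, ts=800e-6, Lp=0.10, tp=400e-6, rho=1451.0)
print(f"Gmax = {Gmax/1e6:.1f} MPa, M = {M/1e6:.1f} MPa, "
      f"Poisson's ratio = {mu:.2f}, E = {E/1e6:.1f} MPa")
```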

The complete test setup showing a test in progress is shown in Figure 2. Typical signals captured by a receiver are shown in Figure 3, from which it is quite easy to accurately identify the arrival of the first wave. Then, the elastic modulus E, shear modulus Gmax, the constrained modulus M, and Poisson’s ratio µ can be calculated using the abovementioned equations.

Results Obtained Two different types of granular soils were used in the testing of this device. Nevada sand was one of the sands used; its grain size distribution is shown in Figure 4. The other sand used in the testing of this device was a mixture of different sands found in the laboratory at Case Western Reserve University. The grain size distribution of this soil is provided in Figure 5. As can be seen, this sand was much coarser-grained than the Nevada sand and can be classified as well-graded, whereas the Nevada sand is classified as poorly-graded. Tests were conducted on the two different soil specimens described above. The samples were prepared in a steel container with a diameter of 29 cm and an overall height of 40 cm. The steel container had enough lateral stiffness to simulate a Ko condition that is characteristic of conditions found in the field. Soil samples were created by pouring the sands into the container using a hopper. In the creation of each sample, the height and rate of pouring were kept constant so as to achieve uniformity throughout the soil sample. After the sample is ready, the cone penetrometer is pushed into the soil slowly until the piezoelectric sensors reach the specified depth. Tests are then conducted to measure the velocities of the s- and p-waves. Then, the cone penetrometer is pushed to deeper positions in the soil to obtain other measurements. The sample container used in the laboratory limited the measurement of all parameters to a depth of about 15 cm. For tests in the field deeper sections of subgrade can be characterized, but in the laboratory the depth was restricted by the height of the model container. Measurements of the velocities of the s- and p-waves were made at the same locations (though not at the same time) so that Poisson’s ratio at that particular depth could be calculated. To ensure the repeatability of the experimental data, each test was conducted on a second sample prepared in a similar fashion. Very good repeatability of the results was achieved. Typical results are shown in Figures 6 and 7. The sample of Nevada sand had a mass density of approximately 1,451 kg/m3 and a void ratio of 0.84. The sample of coarser-grained mixed sand had a mass density of approximately 1,411 kg/m3 and a void ratio of 0.89. From the data recorded by the sensors on the cone penetrometer, it was found that the shear modulus and constrained modulus increase gradually with increasing depth as the effective confining pressure increases. The Poisson’s ratio averaged about 0.36, which is well within the accepted range of 0.2 to 0.4 for loose sands. Also shown in the figures are the results of shear modulus calculated using a commonly known empirical formula by Hardin and Richart (1963) for sand. This equation provides very good agreement with the experimental results.


Proposed Improvements There are several ways in which the current piezo-cone penetrometer system can be improved and made more applicable for use on the Moon and Mars. First, the penetrometers can be scaled down in size and constructed entirely of a lightweight aluminum alloy. Due to the limited space and weight allowances of the launch vehicles, a smaller and lighter design of the penetrometers will make them more practical and economical for space travel. Second, the addition of sensors on the side of the push rods and on the cone tips will enable the penetrometers to test for side friction and tip resistance, so a more in-depth investigation of the soil properties can be carried out. Third, the integration of the system for use on land rovers is necessary, so that the testing can be performed with minimal human interaction; the rovers can be used to perform testing in areas that may be unsafe for human activity. Fourth, the penetrometers can be made waterproof by milling an area in the steel to insert an o-ring between the two plates. The sensors can be coated with epoxy for both waterproofing and protection against abrasion. In addition, the system needs to be protected from dust and debris that might interfere with the mechanics of the system. These improvements are expected to greatly improve the quality of the current piezo-cone penetrometer system and make it much more applicable for use in space.

Significance of the Piezo-Cone Penetrometer: The development of a regolith penetrometer will have a huge impact on the utilization of soil on the Moon and Mars. The Moon and Mars host numerous raw materials that can be put to practical use as humanity expands outward into the solar system. The establishment of a base of operations on the Moon in preparation for and implementation of further exploration of the solar system is in the space program’s plans for the near future. Vital to this goal is the need to learn to “live off the land” on the Moon, which will involve the “In-Situ Resource Utilization” (ISRU) of the materials on the lunar surface for a variety of uses. Such lunar ISRU activities will consist of: facilities construction, regolith digging and moving, trafficability including roads and landing pads, microwave processing, conventional heat sintering, dust abatement (including abrasiveness, gravitational settling, its pervasive nature, and physiological effects), mineral beneficiation, cement manufacture, et cetera. In addition, the surface mobility (rovers), scientific instrument, and EVA (extra vehicular activity) communities also must take into account the properties of the lunar soils. The diverse engineering and material science studies that are crucial for these activities will mostly deal with the lunar regolith and soil as their starting materials on the Moon. This is where the use of the regolith penetrometer comes into play. In order for any of these events to take place on the Moon and then on Mars, it is vital to have an efficient way to test the in-situ soil properties. Without knowledge of these soil properties there can be no construction, no digging or moving of soils, no method of dust abatement, et cetera. The development of this device is crucial to the future of the space program.

Figures

Figure 1. Current piezo-cone penetrometer. Figure 2. Laboratory setup.


Figure 3. Typical signal displayed on oscilloscope. Figure 4. Grain size distribution of Nevada sand.

[Figure: grain size distribution curve, Percent Finer By Weight (0-100%) versus grain Diameter (mm), 0.01-10 mm.]

Figure 5. Grain size distribution of coarser-grained sand.

[Figure: Test results on Nevada sand (mass density 1452 kg/m3). Constrained modulus, maximum shear modulus (test), and the Hardin-Richart (1963) prediction for maximum shear modulus, 0-40 MPa, together with Poisson's ratio, 0-1, plotted against depth, 0-16 cm.]

Figure 6. Test results for loose Nevada Sand. (mass density = 1,451 kg/m3, void ratio = 0.84)


[Figure: Test results on mixed coarser-grained sand (mass density 1412 kg/m3). Constrained modulus, maximum shear modulus (test), and the Hardin-Richart (1963) prediction for maximum shear modulus, 0-40 MPa, together with Poisson's ratio, 0-1, plotted against depth, 0-16 cm.]

Figure 7. Test results for loose coarse-grained mix sand. ( mass density = 1,411 kg/m3, void ratio = 0.89)

Acknowledgments The author would like to thank the Ohio Space Grant Consortium for their continued support of this ongoing research, and Dr. Xiangwu Zeng and the faculty at Case for their academic contributions and financial assistance throughout the development of this project.

References
1. ASTM, 1985, “Annual Book of Standards,” Vol. 04.08 Soil and Rock; Building Stones, American Society for Testing Materials, Philadelphia, PA, 667-701.
2. Drnevich, V.P., 1985, “Recent developments in resonant column testing,” in Proceedings of Richart Commemorative Lectures, American Society of Civil Engineers, pp. 79-107.
3. Dyvik, R. and C. Madshus (1985). Lab Measurements of Gmax Using Bender Elements. Advances in the Art of Testing Soils Under Cyclic Conditions. ASCE Conference, Detroit, MI, Geotechnical Engineering Division, New York, pp. 186-196.

4. Hardin, B.O. and Richart, F.E., Jr. (1963). Elastic Wave Velocities in Granular Soils, Journal of the Soil Mechanics and Foundations Division. ASCE, Vol. 89, No. SMI, pp. 33-65.

5. Hardin, B.O. and Drnevich, V.P. (1972). Shear Modulus and Dampening in Soils: Design Equations and Curves. Journal of the Soil Mechanics and Foundations Division. ASCE, Vol. 98, No. SM7, 1972, pp. 667-692.

6. Hryciw, R.D., and T.G. Thomann (1993). Stress-History-Based Model for Ge of Cohesionless Soils. ASCE Journal of Geotechnical Engineering, Vol. 119, No. 7, pp. 1073-1093.

7. Jovicic, V. Coop, M.R., and M. Simic (1996). Objective Criteria for Determining Gmax from Bender Element Tests. Geotechnique, London, Vol. 46, No. 2, pp. 357-362.

8. Jovicic, V., and M.R. Coop (1998). The Measurements of Stiffness of Clays with Bender Element Tests in the Triaxial Apparatus. Geotechnical Testing Journal, ASTM, Vol. 21, No. 1, pp. 3-10.

9. Thomann, T.G., and Hryciw (1990). Laboratory Measurement of Small Strain Shear Modulus under Ko Conditions. Geotechnical Testing Journal, ASTM, Vol. 13, No. 2, pp. 97-105.

10. Viggiani, G., and Atkinson, J.H. (1995). Interpretation of Bender Element Tests. Geotechnique, Vol. 45, No. 1, pp. 149-154.


11. “What’s Right and Wrong About Infrastructure.” CIVL 1011 – Civil Engineering Measurements: p. 1-4. Online. Internet. 5 Jan. 2005. Available at http://www.ce.memphis.edu/1101/interesting_stuff/infrastructure.html.

12. Zeng, X. and B. Ni (1998). Application of Bender Elements in Measuring Gmax of Sand under Ko Condition. Geotechnical Testing Journal, ASTM, Vol. 21, No. 3, pp. 251-263.

13. Zeng, X. and B.Ni (1999). Stress-Induced Anisotropic Gmax of Sands and Its Measurements, ASCE Journal of Geotechnical Engineering, Vol. 125, No. 9, pp. 741-749.

14. Zeng, X., Figueroa, J.L. and L. Fu. (2003a). Measurements of Base and Subgrade Layer Stiffness Using Bender Element Technique. ASCE Geotechnical Special Publication: Recent Advances in Materials Characterization and Modeling of Pavement Systems, in press.

15. Zeng, X., Figueroa, J.L. and Fu, L. (2003b). Measurement of Base and Subgrade Layer Stiffness Using a Cone Penetrometer Equipped with Piezoelectric Sensors. Proceedings of International Conference on Highway Pavement Data, Analysis & Mechanistic Design Applications, September, Columbus, Ohio.

16. Zeng, X., Figueroa, J.L. and Fu, L. (2003c). Characterization of Subgrade Materials Using a Cone Penetrometer Equipped with Piezoelectric Sensors, paper submitted to the TRB 2004 convention.

17. Zeng, X., L. Fu, and J.L. Figueroa (2004). Characterization of Subgrade Materials Using a Cone Penetrometer Equipped with Piezoelectric Sensors, Proceedings of TRB Annual Conference, January, Washington D.C.


Analysis of Composite Materials in Spacecraft Using Green’s Function

Student Researcher: Therese M. Hurtuk

Advisor: Dr. Ernian Pan

The University of Akron Civil Engineering Department

Many materials are used to build everything from roads to homes to spacecraft. Steel and aluminum are common materials that, although it takes a long time to learn their expected behavior, end up being quite predictable. Today, however, we need more than these simple materials to build structures with top efficiency, especially aircraft and spacecraft. Although there are many elements known to man, the pure elements alone cannot provide all of the properties we desire. Combined materials, called composites, allow us to approach an ideal material; desired behaviors such as high temperature resistance and weight reduction are examples of properties that bring us closer to that ideal. When analyzing these materials, our traditional stress-strain analysis can no longer be applied directly. To understand these helpful materials, Green's functions are applicable to the anisotropic materials that make up composites. Green's functions, in combination with boundary element analysis, provide calculations for the deformation and stress analyses of odd shapes and material discontinuities. Also, an advantage of Green's functions is that, through the use of Maxwell's reciprocity theorem, the domain problem can be reduced to the boundary only. This is the case if the virtual-force method is applied along with the Green's function. The virtual-force method consists of applying a distribution of forces outside the domain of the solution. From both methods a boundary equation is reached. Green's functions are actually applicable to numerous general physical problems, but for their use with composite materials the main dependences are the differential equation, the body shape, and the type of boundary conditions present. The potential uses of composites in space are difficult to enumerate; every day, labs across the world are testing new ideas and combinations, so the discussion is a continual one. One clear example, however, is reinforced concrete, which includes a binder (cement) in addition to reinforcements of gravel and rebar. These individual pure materials are called constituents and usually include a binder or matrix and a reinforcement, as in the case of concrete. The matrix materials are usually made of plastic, metal, or ceramic. The reinforcement comes in three forms: particulate, discontinuous fiber, and continuous fiber. Particulates are simple small particles of roughly the same size in all directions, while fibers have sizes that vary in different directions. Discontinuous reinforcement includes chopped and milled fibers, whereas continuous reinforcement has few or no breaks. Great strength can be found in continuous reinforcing fibers, and they are therefore used in high performance materials like aerospace structures. Composite materials are thus very useful for extending the desired properties of materials; however, in order to analyze their new behaviors, Green's functions can be applied as part of the calculations in the analysis.

Appendix
The Green's function concept below is one that I learned from my Theory of Structures course. The Green's function G(xf;xs) is a two-point function with the source point at xs and field point at xf. The following figures explain the difference between Green's function and the influence line.
The major difference between shear and moment diagrams and influence lines is that shear and bending moment diagrams show the variation of the shear and the moment over the entire structure (varying field point xf) for loads at a fixed position (fixed source point xs). An influence line for shear or moment shows the variation of the function at one section (fixed field point xf) caused by a moving load (varying source point xs).
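To make the two-point nature of G(xf;xs) concrete, here is a minimal Python sketch (my own illustration, using the standard simply supported beam formulas suggested by the figures below) that evaluates the bending moment at a field point due to a point load at a source point. Holding the source fixed and sweeping the field point traces the moment diagram; holding the field point fixed at section B and sweeping the source with a unit load traces the influence line for MB. The span, load, and section location are hypothetical.

```python
def moment(x_field, x_source, P, L):
    """Bending moment at x_field for a simply supported beam of span L
    carrying a point load P at x_source (both measured from the left support)."""
    a, b = x_source, L - x_source
    if x_field <= x_source:
        return P * b * x_field / L
    return P * a * (L - x_field) / L

L_span, P, B = 10.0, 1.0, 4.0          # hypothetical span (m), load (kN), section B (m)
xs = [i * L_span / 20 for i in range(21)]

# Moment diagram: fixed source point (load at B), varying field point.
m_diagram = [moment(x, B, P, L_span) for x in xs]

# Influence line for M_B: fixed field point B, varying source point (unit load).
influence_MB = [moment(B, x, 1.0, L_span) for x in xs]

# For this kernel the two curves coincide (a consequence of reciprocity),
# peaking at P*a*b/L = 2.4 kN*m when the load sits at B.
print(max(m_diagram), max(influence_MB))
```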



[Figures: for a simply supported beam with a point load P, the shear (V) and moment (M) diagrams (fixed source point xs, varying field point xf), with values Pb/L and -Pa/L and peak moment Pab/L, are compared with the influence lines for VB and MB at section B under a moving 1 kN load (fixed field point xf, varying source point xs).]


Pressure Attenuation in Pulse Detonation Combustors

Student Researcher: Douglas K. Huseman

Advisor: Dr. Ephraim Gutmark

Graduate Students: Aaron Glaser, Nicholas Caldwell

University of Cincinnati Department of Aerospace Engineering

Abstract: The relationship between pressure attenuation across an axial flow turbine and the firing frequency is presented for a pulse detonation combustor (PDC) system. The PDC system consists of a single detonation tube, an axial flow turbine, and a 90 degree turn after the turbine for exhaust gas. Sound pressure level (SPL) attenuation data was taken both at the outlet of the turn and immediately after the turbine for multiple frequencies. By comparing these two data sets, a relationship between turn attenuation and firing frequency could be determined.

Project Objectives Research into characterizing the acoustic signature of a PDE has only recently begun. One of the first groups to perform such an investigation was Dittmar et al. at NASA Glenn Research Center [1]. This group found that increasing the firing frequency of a PDE system decreases the overall sound pressure level (OASPL) when operating at constant thrust conditions. Dittmar et al. also found that a PDE system operating at 60 Hz was 20 dB louder than commercial aircraft. Allgood et al. [2] and Glaser et al. [3] both found that when operating at a constant fill fraction the OASPL increased logarithmically with the firing frequency. The high OASPL of a PDE system is a major concern to researchers. A pulse detonation combustor system has the potential to reduce the OASPL since the shock waves from the tubes are broken up across the turbine and the outlet. This experiment investigated the SPL attenuation across the axial flow turbine in a PDC system.

Methodology Used The Pulse Detonation Combustor Facility at the University of Cincinnati has six 1” diameter detonation tubes oriented in an annular array. These tubes are attached to an axial flow turbine with a 90 degree outlet. In this experiment only one detonation tube was used. Tests were run at firing frequencies of 1, 4, 7, 10, 14, 17, and 20 Hz with both the fill fraction and equivalence ratio set to 1. Two separate tests were conducted. For both tests a pressure probe was located upstream of the turbine at the outlet of the detonation tube. In Case 1, a second pressure probe was located at the turbine outlet after the turn, and in Case 2, the second pressure probe was located directly downstream of the turbine. Each frequency was fired for five detonations. After five detonations, the PDC tube reaches an approximate steady-state condition and repeatable data can be taken. Pressure data was sampled at 5 MHz in order to resolve the detonation waves as they cross the probes. For the fifth detonation the maximum pressure difference across the wave is recorded upstream of the turbine and then at the second location. The sound pressure level in decibels at any location is then given by the following equation [4]:

SPL = 20 log(p / pref)

In this equation pref is taken to be 20µPa, which is commonly used in acoustic work. The peak pressure attenuation across the turbine is then just the difference between the SPL before and after the turbine.


Setting P1 as the peak pressure directly upstream of the turbine and P2 as the peak pressure at the second location, the attenuation in decibels can be found through the following equation:

⎟⎟⎠

⎞⎜⎜⎝

⎛=⎟

⎟⎠

⎞⎜⎜⎝

⎛⎟⎟⎠

⎞⎜⎜⎝

⎛−⎟

⎟⎠

⎞⎜⎜⎝

⎛=−=

2

12121 log20loglog20

pp

pp

pp

SPLSPLnAttenuatiorefref

Each frequency was run three times and the average attenuation value was calculated. The attenuation values for the turn were then calculated by subtracting the average attenuation of Case 2 from the average attenuation of Case 1.

Results The figures below show the average SPL attenuation values for the turbine and the turn separately. The equations shown on the charts are linear curve fits relating the attenuation to the firing frequency.

[Figure: Average turbine attenuation versus firing frequency, linear fit y = -0.1077x + 16.868, and average turn attenuation versus firing frequency, linear fit y = 0.0833x + 12.324; attenuation (dB) plotted against frequency (Hz), 0-25 Hz.]

Figure 1. SPL Attenuation as a function of firing frequency.

These charts show that there is a correlation between the SPL attenuation and the firing frequency. However, the slope of the turbine trend is opposite to that of the turn trend. Combining these two curves results in a slight decrease in the overall attenuation with an increase in frequency. The overall SPL attenuation was seen to range from 28 to 29 dB over a frequency range from 1 to 20 Hz.
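To see how the two opposing trends combine into the quoted 28-29 dB, the linear fits shown in Figure 1 can simply be summed. This is only a quick check using the fitted coefficients from the charts; it is not how the experimental totals were obtained.

```python
def turbine_attenuation(f_hz):
    """Linear fit from Figure 1 (turbine), dB."""
    return -0.1077 * f_hz + 16.868

def turn_attenuation(f_hz):
    """Linear fit from Figure 1 (turn), dB."""
    return 0.0833 * f_hz + 12.324

for f in (1, 10, 20):
    total = turbine_attenuation(f) + turn_attenuation(f)
    print(f"{f:2d} Hz: {total:.1f} dB")
# 1 Hz: 29.2 dB, 10 Hz: 28.9 dB, 20 Hz: 28.7 dB -- a slight decrease with frequency
```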

References
1. Dittmar, J., Elliot, D., and Horne, S., “The Far Field Noise of a Pulse Detonation Engine”, NASA TM-2003-212307.
2. Allgood, D., Glaser, A., Caldwell, N., and Gutmark, E., “Acoustic Measurements of a Pulse Detonation Engine”, AIAA 2004-2879, 25th AIAA Aeroacoustics Conference, 10-12 May 2004, Manchester, United Kingdom.

3. Glaser, A., Caldwell, N., and Gutmark, E., “Experimental Investigation into the Acoustic Performance of a Pulse Detonation Engine with Ejector”, AIAA 2005-1345, 43rd AIAA Aerospace Sciences Meeting and Exhibit, 10-13 January 2005, Reno, NV.

4. Hassall, J.R., and Zaveri, K., Acoustic Noise Measurements, 5th ed., Bruel-Kjaer, Denmark, June 1988, p. 33.


Scientific Ballooning Applied to Atmospheric Temperature Analysis and Aerial Photography

Student Researcher: Maurice Jefferson

Advisor: Dr. Augustus Morris, P. E.

Central State University Manufacturing Engineering Department

Abstract Central State University is in the formative stages of establishing a student satellite program to provide opportunities that lead students to choose careers in the aerospace fields. A major goal of the program is to routinely launch scientific payloads to altitudes reaching 50,000 to 100,000 ft using helium-filled weather balloons. This effort emphasized the planning, acquisition, and operation of a successful ballooning mission. Parallel to this effort, a payload enclosure was designed to house the instrumentation and power needed to collect atmospheric temperature data and aerial photographs automatically during the mission. A Hobo data logger is a small and inexpensive device, and it was easily programmed to sample and store the temperature data. A Canon ELPH camera was interfaced with a simple electronic timer circuit to take photographs of the landscape at fixed intervals of time. A global positioning system is used for tracking the balloon satellite. The temperature versus altitude data collected was compared with the U.S. standard atmospheric model of the Thermosphere referenced by the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), and the United States Air Force (USAF).

Project Objective To become familiar with the basic ballooning hardware and understand how to fill a balloon appropriately and launch it correctly; establish a reliable procedure for filling, launching, and recovering data from the balloon; analyze the weather conditions using the Balloontrak software to predict the ideal day to conduct the experiment; and collect temperature data using a HOBO data logger and compare the results with the U.S. temperature model.

Methodology Used The components of the balloon satellite are: an 8 ft weather balloon, foam core material, a 555 timer circuit, a camera, a HOBO data logger, braided nylon 100 lbf string, a 9 ft nylon parachute, a Kenwood TH-D7A, a Garmin GPS25-LVC, and a Garmin GPS II Plus. The balloon is launched from ground level. The payload systems are prepared and sealed before being delivered to the launch site. At the launch site, they are harnessed to the balloon and recovery system, and the balloon is filled with helium. Before release of the balloon, the communications systems and scientific payloads are powered up and initialized. Anchor lines are used to slowly allow the entire balloon assembly to rise into the air, until both capsules are lifted off the ground. The balloon rises from the launch site, recording scientific data and transmitting a GPS locator signal for the recovery team. The balloon trajectory during the ascent phase depends on the strength and direction of upper level winds in the atmosphere. These winds change depending on the altitude, as well as the season.

As the balloon rises, the atmospheric pressure decreases (by a factor of 1/e = 0.368 approximately every 8500 meters). Because it was pressurized at a lower altitude, the balloon expands in response to the decrease in external atmospheric pressure.

Eventually, the external pressure will drop to such a low level that the balloon will expand to the breaking point of the material. At this point, the balloon bursts, and begins to fall.
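The rule of thumb above (pressure falling by a factor of 1/e roughly every 8,500 m) gives a quick estimate of how much the balloon swells before it bursts. The sketch below assumes an isothermal ideal-gas balloon whose internal pressure tracks the outside pressure (i.e., negligible skin tension), so the volume ratio is simply the inverse of the pressure ratio; the sea-level pressure and example altitudes are illustrative values, not mission data.

```python
import math

SCALE_HEIGHT_M = 8500.0      # from the 1/e-per-8500-m rule of thumb
P0 = 101.325e3               # assumed sea-level pressure, Pa

def pressure(alt_m):
    """Approximate atmospheric pressure at altitude (exponential model)."""
    return P0 * math.exp(-alt_m / SCALE_HEIGHT_M)

def volume_ratio(alt_m):
    """Balloon volume relative to launch, assuming isothermal expansion
    and that the internal pressure tracks the outside pressure."""
    return P0 / pressure(alt_m)

for alt in (10_000, 20_000, 30_000):
    print(f"{alt:6d} m: p = {pressure(alt)/1e3:6.1f} kPa, "
          f"volume x{volume_ratio(alt):5.1f}")
# At 30,000 m: p ≈ 3.0 kPa, and the balloon is roughly 34 times its launch volume.
```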


Once it begins to fall, the parachute opens and slows the rate of descent. As in the ascent phase, upper level winds will push the vehicle around, causing its trajectory to drift. The recovery team will monitor the transmitted GPS signal to keep them as close as possible to the ultimate landing site of the vehicle. The vehicle will eventually come to rest somewhere on the ground -- not necessarily in an easily accessible location! If all goes well, it will continue to transmit GPS coordinates, which will allow the recovery team to track it down.

Results Obtained The physical components of the balloon satellite are complete, and the operation plan is ready to be executed. However, there are still problems with the communication system of the balloon satellite. The communication system is an integral part of tracking the balloon satellite and has to work reliably; until it functions properly, the launch is delayed.

Significance and Interpretation of Results The launch is very close to being conducted. All of the physical components of the balloon satellite are assembled. When the problem with the communication system is rectified, a balloon launch will be conducted.

Acknowledgments

• Dr. Augustus Morris, P. E. • Dr. Abayomi Ajayi-Majebi • National Science Foundation • Ohio Space Grant Consortium



Newton’s 3 Laws of Motion

Student Researcher: Jennifer M. Johnson

Advisor: Jennifer Secor

Cedarville University Department of Science and Mathematics

Age Range: Grades 6-8 (could be adapted for high school) Time Frame: 3-4 days depending on the length of classes

Abstract This project incorporates technology and NASA material into a unit on Newton's Laws of Motion for a middle school math/science elective class. The unit extends over several days and begins with an introduction to Isaac Newton and his laws of motion. During this day the students work in groups to research each of Newton's laws and present their findings to the class. The introduction day is followed by several days of experiments in which the students see how Newton's laws play out in everyday life. This includes performing a "magic" coin trick and seeing how far a car will roll down a track when the amount of weight in the car is changed. The main focus of the project is the experiment for Newton's 3rd law of motion, in which the students learn about the process of engine thrust in rockets. They also learn about what it takes to send a rocket into space. Following this, the students have the chance to build their own balloon rocket and see how much weight it can carry. The unit concludes with a summary of what they have learned.

Objectives 1. Students will be able to articulate the main points of each of Newton's 3 laws. 2. Given a situation, students will be able to state which of Newton's laws it pertains to. 3. The students will be able to describe how what they see happening in the lab pertains to each of

Newton’s laws and they will be able to explain why it happens based on Newton’s laws. 4. Students will develop problem solving skills and learn to work together in a group. Ohio Content Standards Science Standards: Science and Technology - Design a solution or product taking into account needs and constraints (e.g., cost, time, trade-offs, properties of materials, safety and aesthetics). Math Standards: Measurement - Use problem solving techniques and technology as needed to solve problems involving length, weight, perimeter, area, volume, time and temperature. Mathematical Processes - Clarify problem-solving situation and identify potential solution processes; e.g., consider different strategies and approaches to a problem, restate problem from various perspectives. Lesson Procedure Day 1 ( Approx 45 min): Materials: Poster paper, markers, crayons, handouts on each of Newton’s laws, and background information about Isaac Newton. Introduction: Start off the lesson by reading a paragraph of background information about Isaac Newton. Do not tell the students who you are reading about and have them try to guess. When most of the students have had an opportunity to guess and/or they have guessed correctly, move on to the lesson.


Main Activity: 1. Split the class up into 3 groups and assign each group one of Newton’s 3 laws of motion. 2. Give each group a different handout on the specific law that they are going to be studying.

In their group it will be their job to read about and understand their law. 3. In their groups give them time (about 20 minutes) to make up a poster on their law. They

also need to come up with some way to teach the class the important aspects of the law. They can do this through a skit, simply explaining it or doing a demonstration.

4. After they are done making the posters have each group show their poster and teach their law to the class. If presentations are getting long, limit them to 5 minutes.

5. When they are all finished conclude the lesson by recapping the major points of each of the laws and answering any questions that the students might have about them.

Day 2 (Approx 1 hour 30 min): Materials: Lab sheets, balloons, paper clips, paper cups, clothes pins, tape, fishing line, and straws (For the complete lesson plan and student worksheet see the NASA website http://www.nasaexplores.com/show_58_teacher_st.php?id=021218135338 ) Introduction: Ask the students a few questions as a ways of checking their understanding of the concepts from the last class. Main Activity:

1. Divide the class into groups of three students. 2. Once they are in their groups, pass out the student worksheet and explain what they are

going to be doing today. It is a good idea to go through the instructions with them first and make sure they don’t have any questions.

3. Give them the materials and instruct them to build their rocket and test it first with no weight on it. After they have all tested their rocket have them hypothesize how much weight they can lift to the ceiling.

4. Allow them to make changes to their rocket and test it several more times to see how much weight they can lift. As they are doing this they need to be filling in their lab sheets.

5. At the end record how much weight each rocket lifted and compare the rocket designs of the different groups.

6. Have them finish up their lab sheets and conclude the lesson with a discussion about rockets and how they illustrate Newton’s 3rd law of motion.

Day 3 (Approx 45 min): Materials: Toy car, lab sheets, meter sticks, washers or pennies, paper cups, masking tape, and index cards. (See www.sciencespot.net/Media/newtonlab.pdf for lab sheets and other possible experiments) Introduction: Remind the students of Newton's first and second laws and tell them that today we are going to be experimenting with these laws. Main Activity: 1. Have the students choose a partner and move to sit with this person. 2. Pass out the lab sheet for the Newton's first law experiments and explain it step by step to them. After you have explained the lab, pass out the materials. 3. Have them work with their partner to fill in the lab sheet. 4. Gather the class back together and pass out the lab sheet for Newton's second law. As a class, work on completing this experiment together. Make sure that all students record the data. 5. When everyone is finished, wrap up the discussion of Newton's 3 laws of motion and ask them a few questions to check their understanding.


Assessment To assess this unit I collected and graded the students’ lab sheets. I also monitored the students’ progress during the whole project time and gave them a participation grade that was based on how well they followed instructions and worked in their groups. At the end of the unit I informally assessed the students’ knowledge about Newton’s 3 Laws by asking them verbal questions and listening to their responses. Since this was an elective class I didn’t give a test or quiz but this could have easily been done. Results/Reflections on the Project I was very pleased with the results of this unit on Newton’s 3 laws. The introduction day in which the students made posters and researched and presented their findings to the class went really well. The students were very creative and I think that having them present what they learned to the class really helped them to gain a deeper understanding of the material. The mini-experiments for Newton’s 1st and 2nd Law worked out well. It gave the students a chance to see the practical applications of Newton’s Laws and to see how Newton’s Laws explain much of the motion that we see around us everyday. The day that we did the rockets was a highlight for many of the students. They enjoyed being able to build their own rocket and test them. This really helped them to see how Newton’s 3rd law applied to real-world situations. Also, building the rockets helped the students to develop problem solving skills and got them to think about real space issues and what can be done to make rockets more efficient. The unit relied heavily on group work and interactions between the students and as a whole they really worked well together. Overall the unit was a success because it caused the students to learn the material while having a good time. Weeks later the students still remembered the experiments we had done and the importance of Newton’s 3 Laws of Motion. The unit also sparked a lot of good questions and interest in space technology. Conclusion In conclusion, I really enjoyed this project and think that it was very beneficial for both me and the students. It was a really great opportunity to do something different with the students and they really enjoyed it. If I were to teach the class again I would definitely include this unit. The students not only enjoyed what they were doing but they learned through the process. This is what real teaching should be about.


Design of a Controlled Environment Simulating the Extreme Temperatures of the Tropopause:

A Test Bed for Thermal Analyses of BalloonSat Payloads

Student Researchers: Shannon Adonis Jones and Charlita C. Lawrence

Advisor: Dr. Augustus Morris P. E.

Central State University Manufacturing Engineering Department

Abstract
BalloonSat programs are growing at an ever-increasing rate across the country and in Ohio. These programs give students the opportunity to explore earth and space from incredible altitudes up to 100,000 feet. At these heights, the atmospheric conditions are very similar to those in deep space. Students involved in BalloonSat programs not only have the chance to conduct research, but the harsh environment of near space necessitates the sound engineering design of payloads able to collect data and execute commands without fail. BalloonSats face the greatest thermal challenge while traveling through the tropopause. The tropopause is the constant-temperature interface between the troposphere and the stratosphere. It spans altitudes roughly between 33,000 and 66,000 feet above earth. Typical temperatures in the tropopause are approximately -60 degrees Celsius. Many electrical devices begin to fail at temperatures below -20 degrees Celsius. Appropriate thermal design of the payload is paramount to the success of any BalloonSat mission. As potential payloads are designed, it is desired to test the payloads in temperature environments simulating the tropopause. The objective of this project is to design an environment suitable for testing the payloads thermally. Primarily, the designed environment must be capable of maintaining a constant temperature similar to the tropospheric environment. The test bed was constructed with Styrofoam, using dry ice as a heat sink. A known heat source, similar to that generated by a payload, was placed in the sealed environment. Temperature of the enclosed air was monitored over time using standard temperature sensors and a computer data acquisition system. Results demonstrate the ability to test BalloonSat payloads with such a system.
Project Objective: To simulate an environment similar to what the payload is subjected to in the stratosphere; to set up instruments to collect data on the payload as it cools in a stratosphere-like environment and analyze the data against a standard thermal model; and to determine whether the payload interior can withstand extremely low temperatures.
Methodology Used
The material selected for the payload is foam core. Foam core sheet is a versatile and inexpensive material. It cuts and forms easily with sheet-metal precision. The Styrofoam core material should provide a moderate thermal R-value. The test bed was constructed with Styrofoam, using dry ice as a heat sink. Temperature of the enclosed air was monitored over time using thermocouples, which transfer the data to a computer so the data can be analyzed. A thermal system of this type can be modeled as a first-order differential equation:
θw′(t) + A·θw(t) = A·θa, where A = 1/(mθRθ)
Solution: θw(t) = θa + (θw(0) − θa)e^(−At)
One way to check the solution is to apply linear regression to the data. Solution in linear form: ln(θw(t) − θa) = ln(θw(0) − θa) − At
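The linearized form above lends itself to a simple least-squares check. The sketch below is a minimal example of that fit, assuming hypothetical arrays of time stamps and enclosed-air temperatures from the data acquisition system and an ambient (dry-ice) temperature of −60 °C; it is not the authors' actual analysis script.

```python
import numpy as np

# Hypothetical samples: time in seconds, enclosed-air temperature in deg C
t = np.array([0, 250, 500, 750, 1000, 1250, 1500, 1750, 2000])
theta_w = np.array([22.0, 17.8, 14.3, 11.4, 9.0, 7.0, 5.4, 4.1, 3.5])
theta_a = -60.0  # assumed ambient temperature inside the test bed, deg C

# Linear form: ln(theta_w(t) - theta_a) = ln(theta_w(0) - theta_a) - A*t
y = np.log(theta_w - theta_a)
slope, intercept = np.polyfit(t, y, 1)

A = -slope                              # cooling coefficient, 1/s
theta_w0 = theta_a + np.exp(intercept)  # fitted initial temperature, deg C
r = np.corrcoef(t, y)[0, 1]

print(f"A = {A:.3e} 1/s, theta_w(0) = {theta_w0:.1f} C, R^2 = {r**2:.4f}")

# First-order model prediction for comparison with the measured data
theta_fit = theta_a + (theta_w0 - theta_a) * np.exp(-A * t)
```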


Results Obtained
[Figure: Temperature vs. Time; temperature of the enclosed air plotted against time in seconds.]

This temperature-versus-time graph displays the temperature of the enclosed air decreasing over time.

Regression Statistics: Multiple R = 0.995874; R Square = 0.991766; Adjusted R Square = 0.991725; Standard Error = 0.009512; Observations = 205.

[Figure: ln(Thw − ThA) plotted against time in seconds.]

A regression analysis was conducted on the temperature inside the payload minus the ambient air (−60 °C).

Significance and Interpretation of Results
After the trials were conducted, the regression analysis showed that the first-order thermal model describes the data well. The components inside the payload cannot function if the temperature inside the payload reaches −9.78 °C. The experiment showed that the lowest temperature reached inside the payload was 3.5 °C. This indicates that the foam core is a good insulator and that the payload will withstand and function in ambient temperatures reaching −60 °C.
Acknowledgments

• Dr. Augustus Morris, P. E. • Dr. Abayomi Ajayi-Majebi • National Science Foundation • Ohio Space Grant Consortium



Protein Interactions in Osteoclast Differentiation

Student Researcher: Naomi E. Kenner

Advisor: Alicia Schaffner, Ph.D.

Cedarville University Department of Science and Math

Abstract
Osteoclast cells are hematopoietic in origin and derived from the monocyte/macrophage lineage. The osteoclast acts as the primary bone-resorbing cell in the body. Abnormal osteoclast activity results in diseases such as osteoporosis. For proper bone resorption, osteoclasts must differentiate from their precursor cells. The differentiation is initiated by a ligand-receptor interaction involving receptor activator of NF-κB (RANK) and RANK ligand (RANKL). The interaction of RANKL-RANK results in activation of various signaling cascades during osteoclast development and activation. Our research seeks to examine the signaling cascades activated in response to RANK activation. It has been shown that RANK activates the p38 kinase, which phosphorylates MITF, thus activating this transcription factor. These proteins have been shown to play a role in osteoclast development. In order to develop our understanding of osteoclast differentiation, our lab is focused on identifying additional protein partners of both MITF and p38. A yeast two hybrid assay will be conducted to identify new binding proteins involved in osteoclast development.
Project Objectives
Bone disease is a growing concern among the American population and medical community. Bone diseases such as osteoporosis, osteopetrosis, osteogenesis imperfecta, hypercalcemia, myeloma, and Paget's disease affect an estimated 50 million Americans. An imbalance in bone remodeling is the cause of all of these diseases. Bone tissue is an ever-changing structure that is constantly being broken down and built back up. This process, called bone remodeling, involves cells known as osteoblasts and osteoclasts. Each year these cells resorb (osteoclasts) and rebuild (osteoblasts) approximately 10% of the adult skeleton. From birth to early adulthood the balance of bone remodeling is tilted toward formation. Osteoblasts, the "building" cells, lay down more bone matrix than the osteoclasts break down, resulting in bone growth. The most obvious form of this growth is in height as people mature, but it also appears as appositional growth as bones grow in diameter. Between the ages of 25 and 35, bone remodeling equilibrates and osteoblasts form bone at the same rate as osteoclasts resorb it. During this period adults have the largest bone mass possible. In later adulthood the balance shifts again, this time toward bone loss, with increased osteoclast activity. This is one of the reasons post-menopausal women develop osteoporosis. Improper balance of these three stages in bone construction results in the bone diseases millions of Americans struggle with. Much of the recent research in bone development focuses on the osteoclast to answer questions about bone disease. The osteoclast is a giant multinucleated cell of hematopoietic origin derived through the monocyte/macrophage lineage. Osteoclast differentiation, function, and survival from this lineage are regulated by various proteins from the tumor necrosis factor (TNF) family. TNF family members control osteoclast activity through molecular signaling via signal transduction pathways. The pathway involved in our study is the receptor activator of NF-κB ligand (RANKL) pathway. RANKL binds to its receptor, RANK, which signals through TNF receptor-associated factor (TRAF) proteins, fundamentally controlling osteoclast development (Feng, 2005). Once the osteoclast is activated through TRAF, the signal transduction pathway diverges in various directions, involving kinase cascades that target specific genes (Reddy, 2004).
The RANKL pathway we are interested in involves the activation of the p38 MAPK protein that in turn targets microphthalmia transcription factor, abbreviated MITF (Manskey et al., 2002).


Figure 1. RANKL-RANK pathways (Reddy, 2004).

The RANKL/p38 MAPK pathway is located on the left of Figure 1, ending in MITF that then enters the osteoclast cell nucleus where it regulates gene expression. The goal of our research is to identify novel proteins that interact with MITF and p38 MAPK. Identifying new proteins in the RANKL/p38 MAPK pathway can lead to a better understanding of the specific ways osteoclasts development is mediated within the cell. With a better understanding of cellular signaling pathways involved in osteoclast development researchers can develop innovative approaches to treating and preventing bone-associated diseases. Methodology To identify novel proteins in the RANKL/P38 MAPK pathway we are using a yeast two hybrid assay, this is a molecular technique used to study protein-protein interactions. The yeast two hybrid assay takes advantage of galactose metabolism in yeast. Galactose is transported into yeast cells and then converted to galactose-6-phosphate by several enzymes which are transcriptionally regulated by various proteins. The protein of our interest is Gal4; Gal4 plays the role of a DNA-binding activator. Gal4 is a modular protein made up of two domains: a DNA-binding domain (BD) and an activation domain (AD). This means that in order for the cell’s DNA to transcribe, or make, a specific protein these two components must connect. It is like putting a key in an ignition. In order for the car to start (representing transcription) the key (activation domain) must be inserted into the ignition (the binding domain). Only then will the car turn on, in the same way a functional Gal4 transcription activator starts cellular transcription. The key idea is that the Gal4 BD must physically bind to the AD to activate protein transcription. This is what the yeast two hybrid technique exploits (Sobhanifer, 2003).

Figure 2. Gal4 transcriptional activator exploited for yeast two hybrid assay (Sobhanifer, 2003).


To take advantage of this cellular process proteins of interest are attached to these domains. One protein acts as the “bait” and one acts as the “hunter”. The bait protein is the known protein attached to the BD; the “bait” in our study consists of MITF and p38 MAPK. The hunter is composed of a specific cDNA library fused to the AD. Our cDNA library is composed of cDNA from bone marrow. Our bait (MITF and p38 MAPK) is attached to the yeast strain Y187 and our library is fused to yeast strain AH109.

The first step in a yeast two hybrid assay is constructing a DNA-BD fusion. The next step is generating a cDNA library and transforming it into the AD. With all of the yeast transformed, the next step is mating the yeast; this is the actual two-hybrid screening step. The Y187 yeast containing MITF and p38 are mated with the AH109 yeast containing the library. Selecting for recombinant diploid yeast expressing interacting proteins is next. Lastly, the diploid yeast is analyzed to identify and verify positive protein interactions.

Results Obtained
Currently in our research we have obtained the two yeast strains containing the fused bait and cDNA library. Before mating the yeast, the individual strains must be grown in large quantities. To do this, the pre-transformed yeast cells are plated and incubated to obtain colonies. We made three separate plates containing the vector-only yeast (which has the Gal4 BD by itself), MITF-transformed yeast, and p38 transformants. All of the plates contained SD growth agar lacking the amino acid tryptophan (−Trp). When large colonies form, they are inoculated in media to grow and multiply. Our colonies, however, grew slowly and were small in size. Assuming the slow colony growth was due to poor plating technique, we re-plated the yeast. Re-plating still resulted in small, slow-growing colonies. This has been known to happen with transformed yeast. To compensate for the small colonies, our lab protocol suggested adding succinate to the liquid media of the inoculated cells to increase growth.

Next, using our small colonies, we inoculated the yeast in 50 ml of our growth medium composed of SD, −Trp, and kanamycin, then placed them in a shaking water bath. Unexpectedly, none of the yeast cells grew. At first we suspected that our bait proteins (MITF and p38) were toxic to the yeast, because this has been known to happen in yeast two-hybrid work. This, however, could not be the reason for the lack of growth, because the vector-only yeast (our control, containing only the Gal4 binding domain and not our bait proteins) did not grow either. Eliminating this possible reason, we next hypothesized that the lack of growth was due to non-viable yeast cells. To test this, we inoculated the untransformed Y187 yeast we received from the company into the growth media. This did result in growth, meaning the yeast themselves were viable. Next we hypothesized that the growth media could have become contaminated or been improperly prepared. To test this, we remade all media, inoculated the yeast cells, and still achieved no growth.

Conclusions Currently we are working with supplier, Clontech, to troubleshoot why we are not achieving growth. We are hypothesizing that the problem lies in the transformed yeast cells. The untransformed cells grew, but none of the transformed cells did. To determine if this is the problem we will be re-transforming all of the yeast cells ourselves and starting our experiments over. Until our yeast strains can be grown they cannot be mated, screened, and analyzed for protein interactions. Our research will be continuing until the end of the semester and into the summer. In the event that our yeast cells are able to grow we hope to examine possible novel protein interactions involving the proteins MITF and p38. These proteins are part of a RANKL signal transduction pathway that is involved in osteoclast differentiation. Identifying new proteins and interactions in this pathway may lead us one step closer to preventing and treating bone-associated diseases.

References
1. Feng, Xu (2005). Regulatory roles and molecular signaling of TNF family members in osteoclasts. Gene, 350, 1-13.
2. Mansky, Kim C., et al. (2002). Microphthalmia transcription factor is a target of the p38 MAPK pathway in response to receptor activator of NF-κB ligand signaling. Journal of Biological Chemistry, 277(13), 11077-11083.
3. Reddy, Sakamuri (2004). Regulatory mechanisms operative in osteoclasts. Critical Reviews in Eukaryotic Gene Expression, 14(4), 255-270.
4. Sobhanifar, Solmaz (2003). Yeast Two Hybrid Assay: A Fishing Tale. BioTeach Journal, 1, 81-88.


Comparing and Contrasting ICD-9-CM to ICD-10-CM

Student Researcher: Loretta B. Kish

Advisor: Deborah Hardy

Lakeland Community College Science and Health Division

Abstract ICD (International Classification of Diseases and Related Health Problems) is the international standard diagnostic classification for all general epidemiological and many health management purposes. It was developed by the World Health Organization (WHO), and it is used to classify diseases and other health problems recorded on many types of health and vital records including death certificates and patient medical records. ICD-9-CM (International Classification of Diseases, 9th Revision, Clinical Modification), was designed for the classification of morbidity and mortality information for statistical purposes, and for indexing of hospital records by disease and operations, for data storage and retrieval. It was implemented in 1979, and it is used for many more purposes today than when it was originally developed, such as reimbursement for medical providers. It is now outdated and obsolete and is no longer able to support today’s health information needs. ICD needed to be greatly expanded and restructured to accommodate the need for updated medical knowledge, so the WHO developed ICD-10. The United States has used ICD-10-CM for mortality statistics since 1999 but is virtually the only industrial nation that has not upgraded its morbidity coding to ICD-10. The ICD-9-CM coding system is running out of space, and is unable to accommodate many new codes. ICD-9-CM contains about 13,000 codes whereas ICD-10-CM contains 120,000. ICD-10-CM will enable medical professionals to look at data with much more detail and clarity. ICD-9-CM cannot address the need for greater specificity, advances in medicine and in new diseases. It is difficult to exchange healthcare diagnostic data with healthcare professionals around the world since 99 countries are already using ICD-10 for both mortality and morbidity. The longer the healthcare industry continues to use ICD-9-CM, the more difficult it becomes to share disease and mortality data when such worldwide sharing is critical for public health. For instance, ICD-10-CM would have better documented the West Nile Virus and SARS outbreaks for earlier detection and better tracking. ICD-10-CM also provides the ability to track public health outbreaks such as bio-terrorism. Project Objectives My objective was to demonstrate how holding on to the outdated coding system of ICD-9-CM makes it difficult to collect complete and accurate medical data, and creates “gray” areas with varying interpretation of proper code assignments. The longer we use ICD-9-CM the more difficult it becomes to share medical data with the rest of the world. Results Obtained The differences between ICD-9-CM and ICD-10-CM are many. ICD-10-CM has up to 7 digit alphanumeric codes rather than the current 3 to 5 digit numeric codes of ICD-9-CM. Some chapters have been rearranged and some titles have changed and conditions regrouped in ICD-10-CM. ICD-10-CM has almost twice as many categories as ICD-9-CM. ICD-10-CM features improvement in content and format including the addition of information relevant to ambulatory and managed care encounters, expanded injury codes, and the creation of combination diagnosis/systems codes to reduce the number of codes needed to fully describe a condition. ICD-10-CM will greatly reduce the number of unspecified codes. ICD-10-CM will be able to identify right or left organs and sides of the body, something that ICD-9-CM could not do. ICD-10-CM offers greater specificity in code assignments over ICD-9-CM. 
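To make the structural difference concrete, the sketch below contrasts the two code formats using simplified, illustrative patterns based only on the description above (3-5 digit numeric versus up to 7-character alphanumeric); it deliberately ignores ICD-9-CM's V and E code prefixes and ICD-10-CM placeholder rules, and the sample codes are shown for illustration only.

```python
import re

# Simplified, illustrative patterns:
# ICD-9-CM: 3 to 5 digit numeric codes (decimal point after the third digit)
# ICD-10-CM: 3 to 7 character alphanumeric codes beginning with a letter
ICD9_PATTERN = re.compile(r"^\d{3}(\.\d{1,2})?$")
ICD10_PATTERN = re.compile(r"^[A-Z]\d{2}(\.[A-Z0-9]{1,4})?$")

def classify(code: str) -> str:
    """Return which format a diagnosis code appears to follow."""
    if ICD9_PATTERN.match(code):
        return "looks like ICD-9-CM"
    if ICD10_PATTERN.match(code):
        return "looks like ICD-10-CM"
    return "unrecognized format"

# Sample codes, for illustration only
for code in ["250.00", "E11.9", "S52.521A"]:
    print(code, "->", classify(code))
```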
ICD-10-CM offers better clinical descriptions and the notes, instructions and guidelines are more clear and comprehensive. The Blue Cross Blue Shield Association along with the management consulting firm, Robert E. Nolan Co., says that ICD-10-CM would have no proven benefits and would result in backlogs


and delayed payments because of the increased time coders would need to properly code claims, but ICD-10-CM's level of detail actually would, most likely, improve coders' productivity. Switching to ICD-10-CM would hurt the pocketbooks of the insurance industry. The Blue Cross Blue Shield Association says that it would cost the insurance industry from $6 billion to $14 billion to make the change to ICD-10-CM. However, any delay in the adoption of ICD-10-CM will cause an increase in future implementation costs as the management of health information becomes increasingly electronic and the costs of implementing new coding systems increase due to required systems and application upgrades. Unlike the insurance industry, there is widespread support for ICD-10-CM within the healthcare community. Some of the organizations in favor of replacing ICD-9-CM with ICD-10-CM are the National Committee on Vital and Health Statistics, the Advanced Medical Technology Association, the American College of Obstetricians and Gynecologists, the American Hospital Association, the American Medical Association, the Federation of American Hospitals, and the Healthcare Financial Management Association. ICD-10-CM can better accommodate advances in medicine, reduce the number of rejected claims, and improve reimbursement, care quality, safety, and disease management. Replacing ICD-9-CM with ICD-10-CM is necessary in order to maintain clinical data comparability with the rest of the world.



A Review of Energy Harvesting Potential

Student Researcher: Matthew L. Kocoloski

Advisor: Dr. Kevin Hallinan

University of Dayton Department of Mechanical Engineering

Abstract A study was conducted to determine whether or not energy harvesting technologies could serve as a feasible alternative to fossil fuels in the near future. Energy harvesting technologies refer to those technologies that convert energy from the surrounding environment into a more useful form of energy, namely electricity. A literature review and thermodynamic analysis of various energy harvesting technologies were performed, and these technologies were compared based on first- and second-law efficiency as well as specific power and power density, where applicable. Project Objectives The issue of peak oil production has recently come to the forefront of discussions in economic, engineering, and environmental circles. Many experts predict that peak oil production will occur within the next 20 years, if not before. Following peak oil production, the production of oil will begin to decline, and oil will no longer be able to serve as the primary source for society’s energy requirements. Thus, if the world wishes to continue to develop socially and industrially following peak oil production, alternative energy sources will have to be developed. Energy harvesting technologies represent a very promising alternative energy source, one that could help solve the energy related problems that society will soon face. Energy harvesting technologies refer to those technologies which convert energy from the surrounding environment into a more valuable form, almost always in the form of electricity. Energy harvesting technologies include solar photovoltaic and solar thermal electricity systems, thermoelectric and thermionic devices, wind generators, and piezoelectric devices. Solar photovoltaic and solar thermal electricity systems generate electricity through the conversion of solar radiation, thermoelectric and thermionic devices generate electricity from a temperature difference, wind generators harvest energy from the kinetic energy found in wind, and piezoelectric devices generate power through the conversion of the energy found in vibrations. Although there are additional devices that could be classified as energy harvesting technologies, those classes that were previously mentioned are the most prominent classes and were therefore the classes that were investigated in this project. Some of these technologies have gained widespread use in some areas for large-scale power generation, while other types of energy harvesting technologies are much better suited for small-scale, niche applications. The purpose of this project was to investigate the current state of the art in energy harvesting technologies, as well as the future performance of these technologies that could be projected based on recent developments, and to determine whether or not they could serve as a feasible alternative to fossil fuels in the near future. By conducting a thorough literature review of recent developments in energy harvesting technologies, and analyzing these technologies from a thermodynamic perspective, an accurate assessment of the potential held by energy harvesting technologies could be generated. Methodology Used The primary methodology used throughout the project was simply a thorough literature review. Before performing any useful analysis of the current status or future potential of energy harvesting technologies, it was necessary to understand the current state of the art for these technologies. 
Thus, it was first necessary to conduct a thorough literature review of recent developments in solar photovoltaic, solar thermal electricity, thermoelectric, thermionic, wind generation, and piezoelectric technologies. Because this project was chiefly concerned with the current and future performance of energy harvesting technologies, there was a greater emphasis placed on the efficiency and power generation of the technologies than on the manufacturing processes used to produce these technologies.


Following the literature review, thermodynamic analysis of these energy harvesting technologies was conducted in an attempt to determine the current and future performance of these technologies. Although different energy harvesting technologies may be very different in terms of size and function, there are metrics common to the majority of energy harvesting technologies that can be used to compare performance. All energy harvesting devices convert one form of energy into another, and every form of energy conversion has an associated conversion efficiency. In thermodynamic terms, this efficiency is referred to as the first-law efficiency. Although the calculation of the efficiency of a device depends on the energy inputs and outputs, the efficiency for any device can be expressed generically using Equation (1) given below.

η1 = Euseful / Ein    (1)

In Equation (1), η1 represents the first-law efficiency of the device, and Euseful and Ein represent the useful energy out of and the total energy into the device, respectively. A higher efficiency indicates that the device converts a greater portion of the total energy input into useful energy, so devices with greater efficiencies are clearly more desirable than those with lower efficiencies. Perhaps more important, or at least more indicative of the performance of a given technology, than the first-law efficiency is the second-law efficiency. The second-law efficiency essentially refers to how close a technology comes to achieving the ideal conversion efficiency. The second-law efficiency can be calculated by dividing the first-law efficiency of a device by its theoretical maximum efficiency using Equation (2), shown below.

η2 = η1 / ηmax    (2)

In Equation (2), η2 represents the second-law efficiency, η1 is defined as in Equation (1), and ηmax refers to the maximum theoretical efficiency for the device. The theoretical maximum efficiency for a given device can often be obtained through an application of thermodynamic laws. In addition to the first- and second-law efficiencies, these devices were compared on the basis of specific power (power generated per unit mass), power density (power per unit volume), and areal power (power per unit area) where applicable.
Results Obtained
Values for the first- and second-law efficiencies, as well as values for the specific power, power density, and areal power, were obtained for state-of-the-art devices in the classes of energy harvesting technologies mentioned in the Project Objectives. These results were summarized in tables, such as the ones included in the Figures/Charts section. Table 1 provides a comparison of state-of-the-art photovoltaic cells, both for cells that use non-concentrated solar radiation and those that use a concentrator to increase the intensity of the incoming radiation. Values are given for the first- and second-law efficiencies of these technologies in addition to the specific and areal powers. Table 2 gives a similar comparison for solar thermal electricity generating systems, providing the first- and second-law efficiencies for a number of state-of-the-art systems. Table 3 provides a comparison of state-of-the-art thermoelectric devices. Thermoelectric devices convert heat into electricity via the Seebeck effect, and recent advances in thermoelectric material research and manufacturing have significantly increased their efficiencies. Similar tables have been generated for wind generation and piezoelectric technologies, but have not been included with this report due to space constraints.
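As a worked example of the two efficiency metrics in equations (1) and (2), the sketch below computes a second-law efficiency from a first-law efficiency and a theoretical limit; the numbers used are taken from the DECC entry in Table 2, and the code is illustrative rather than part of the original analysis.

```python
def first_law_efficiency(e_useful: float, e_in: float) -> float:
    """Equation (1): fraction of the input energy converted to useful output."""
    return e_useful / e_in

def second_law_efficiency(eta_1: float, eta_max: float) -> float:
    """Equation (2): how close the device comes to its theoretical limit."""
    return eta_1 / eta_max

# Example using the DECC paraboloidal-dish entry from Table 2:
# first-law efficiency 0.24 against a theoretical maximum of 0.70.
eta_1 = 0.24
eta_2 = second_law_efficiency(eta_1, 0.70)
print(f"Second-law efficiency: {eta_2:.2f}")   # ~0.34, matching the table
```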


Significance and Interpretation of Results The research that was performed throughout this project revealed the fact that there are many exciting advances that are either currently taking place or likely will take place in the near future in the area of energy harvesting. Recently designed photovoltaic systems feature appreciably higher efficiencies and specific powers that any previously designed modules. And thermoelectric modules have been recently developed that feature second-law efficiencies of roughly 50%. These technologies, and many others that have been recently developed, have greatly increased the performance of energy harvesting technologies and make them a promising option for future power generation. However, these promising developments are still occurring primarily in a laboratory environment on a very small scale. It will most likely be years before these advancements have a significant impact on commercial power generation. Although these developments will likely crawl towards commercial availability over the next few years due to improved manufacturing processes, this process will require significant time. Additionally, it is important to remember that the performance of energy harvesting technologies is heavily dependent upon the environment in which that technology is used. While these technologies will certainly be capable of effectively generating power in an appropriate environment, they certainly do not represent a simple, catch-all solution to future energy generation needs. Energy harvesting technologies have the potential to play a large role in meeting future energy needs, but only as a component of a larger and more diverse energy generation strategy. Figures/Charts

Table 1. Solar Photovoltaic Cell Comparison.

Non-Concentrated Solar Radiation
Solar Module | Type of Cell | First-Law Efficiency | Second-Law Efficiency | Specific Power (W/kg) | Areal Power (W/m2) | Status | Source
BP SX 120 W | Rigid | 0.10 | 0.15-0.16 | 10-15 | 100-150 | Commercially Available | Solardyne.com
Next Generation Ultraflex (NGU) | Flexible | 0.18 | 0.26-0.30 | 175-220 | 250-300 | Laboratory Testing | Spence, 2004
Lightfoil | Flexible | 0.15 | 0.22-0.25 | 1440 | 150-200 | Laboratory Testing | DayStar Technologies
Theoretical Limit | Infinite Multi-Junction | 0.61-0.68 | 1.00 | NA | NA | NA | Landsberg, 2002; Wurfel, 2002

Concentrated Solar Radiation
Solar Module | Type of Cell | First-Law Efficiency | Second-Law Efficiency | Specific Power (W/kg) | Areal Power (W/m2) | Status | Source
Stretched Lens Array (SLA) | Rigid, Multi-Junction | 0.23 | 0.26-0.27 | 180 | 300 | Field Testing | O'Neill, 2004
CellSaver | Rigid, Triple-Junction | 0.26 | 0.30-0.31 | Unavailable | 500-550 | Field Testing | Eskenazi, 2004
SLA Blanket | Flexible, Multi-Junction | 0.45 | 0.51-0.53 | 1000 | 600 | 10-20 years | O'Neill, 2004
Theoretical Limit | Infinite Multi-Junction | 0.85-0.88 | 1.00 | NA | NA | NA | Landsberg, 2002; Wurfel, 2002


Table 2. Solar Thermal Electricity System Comparison (Mills).
System | Location | Technology Used | Operating Temperature (C) | Theoretical Maximum Efficiency | First-Law Efficiency | Second-Law Efficiency | Production Capacity (MW)
SEGS, LS-3 | United States | Parabolic Trough | 391 | 0.55 | 0.14-0.16 | 0.25-0.29 | 80
Solarmundo | Belgium | LFR | 400 | 0.56 | 0.10-0.12 | 0.18-0.21 | 111
CLFR Concept | None | CLFR | 285 | 0.47 | 0.19 | 0.41 | 100
DECC | United States | Paraboloidal Dish | 720 | 0.70 | 0.24 | 0.34 | NA
SAIC/STM | United States | Paraboloidal Dish | 720 | 0.70 | 0.18 | 0.26 | NA
Solar Tres | Spain | Single Tower | 565 | 0.64 | 0.087 | 0.14 | 15
PS10 | Spain | Single Tower | 680 | 0.69 | 0.105 | 0.15 | 10
PS10 (projected) | Spain | Single Tower | 680 | 0.69 | 0.175 | 0.25 | 30

Table 3. Thermoelectric Material Comparison, Expected Performance.
Type of TE Material | Material | ZT | Temperature (K) | First-Law Efficiency | Second-Law Efficiency | Status | Source
Bulk | Bi(2)Te(3) | 1.00 | 400 | 0.088 | 0.208 | Commercially Available | Hi-Z Technologies (www.hi-z.com)
Thin-Film Superlattice | BiTe-based [n-type] | 1.46 | 300 | 0.010 | 0.225 | Laboratory Testing | Venkatasubramanian, 2001
Thin-Film Superlattice | BiTe-based [p-type] | 2.40 | 300 | 0.014 | 0.302 | Laboratory Testing | Venkatasubramanian, 2001
Thin-Film Superlattice | PbTe-based [n-type] | 1.25 | 570 | 0.177 | 0.271 | Laboratory Testing | Beyer, 2002
Thin-Film Superlattice | PbTe-based [p-type] | 1.20 | 500 | 0.149 | 0.255 | Laboratory Testing | Beyer, 2002
Bulk | AgPb(10)SbTe(12) | 1.00 | 700 | 0.181 | 0.247 | Laboratory Testing | Hsu, 2004
Bulk | AgPb(10)SbTe(12) | 1.30 | 900 | 0.243 | 0.302 | Laboratory Testing | Hsu, 2004
Bulk | AgPb(18)SbTe(20) | 2.10 | 800 | 0.297 | 0.383 | Laboratory Testing | Hsu, 2004
Quantum Well Thin Film | B(4)C/B(9)C-Si/SiGe | 4.00 | 450 | 0.235 | 0.455 | Laboratory Testing | Ghamaty, 2005
Metal-Based Superlattice | Metal-based superlattices | 7.00 | 300 | 0.022 | 0.483 | Theoretical | Vashaee, 2004
III-V Semiconductor Nanowires | InSb nanowires | 3.00 | 300 | 0.015 | 0.338 | Theoretical | Mingo, 2004

Acknowledgments and References: The author would like to thank Dr. Kevin Hallinan for his guidance throughout the project, and both the University of Dayton Research Institute and the University of Dayton Honors and Scholars program for providing funding for this research.


CFD Analysis of Flow Over a Model Rocket

Student Researcher: Brandon D. Koester

Advisor: Dr. Jed E. Marquart

Ohio Northern University Mechanical Engineering Department

Abstract Building and flying model rockets is a popular hobby, with many enthusiasts building and flying their own rockets. This project examined the air flow over a model rocket and the effect of the placement of the rocket engines on drag using computational fluid dynamics. By varying the distance of engine protrusion on the aft end of the rocket, the drag force on the rocket was altered. Several different cases were investigated. Project Objectives The main objective of this project was to look at the effect of the protrusion depth of rocket engines on the coefficient of drag of a model rocket. A baseline case with no rocket engines was evaluated first to provide data with which to compare the other geometries. From there, the rocket engines were added to the geometry and the velocity of the rocket was varied between Mach 0.1 and 0.6. Finally, the angle of attack was varied to simulate the effect of a crosswind on the value of the coefficient of drag. Methodology The model rocket was modeled as a 43 inch long and 2.70 inch diameter rocket. In order to accurately model the nose cone, diameter measurements were taken each quarter of an inch along the nose cone and recorded. The results were then graphed using Excel® and a fourth degree polynomial curve fit to the data. The points from this curve were then created in ProE Wildfire® and revolved 360 to create the body of the rocket. This baseline file was then saved as an IGIS file to be read into the mesh generation software. The rocket was then imported into Gridgen® in order to create the mesh. For the cases with rocket engines, the rocket engines were created first. The three engines were given a 1-inch diameter with a 0.1-inch exhaust hole and spaced 120 apart. In order to refine the grid located behind the rocket, three separate flow boxes were created. The overall dimensions of the combined flow boxes were 54 inches wide by 1333 inches long. An unstructured, tetrahedral mesh was then chosen for the flow box volumes. The mesh on the faces of the rocket body itself was unstructured, triangular cells except for on the rocket engines where a structured grid was used. Several boundary conditions were imposed. For the basic case with no engines, the leading surface of the flow volume was set as a source, while the back edge was set as a sink. The remaining faces of the flow volume were set as farfield conditions (note that the two domains between the flow boxes were set as baffle faces) and the faces of the rocket were considered to be solid walls. For the case with the rocket engines, the only differences were that all the flow box faces were set as farfield conditions and that the exhaust holes of the engine were set as sources. With the geometry complete, Cobalt® software was used for processing. A boundary condition file was specified, along with a job file, in order to run the solutions1. The temporal accuracy was left as first order, steady state, while the spatial accuracy was set as second order. In order to help the solution stabilize, the CFL value was made small initially and ramped up after several iterations. In the cases involving the rocket engines, the CFL was ramped up slower and the temporal damping coefficient was increased to help the solution stabilize and converge. The only downside to the smaller CFL value was that the smaller the CFL value is made, the longer the solution takes to converge. 
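The nose-cone modeling step described in the methodology above can be sketched as a simple least-squares fit. In the example below, the quarter-inch diameter measurements are hypothetical placeholders (only the 2.70-inch maximum diameter and the fourth-degree fit come from the text), so this is an illustration rather than the actual geometry-generation workflow.

```python
import numpy as np

# Hypothetical measurements: axial position along the nose cone (in) vs. diameter (in),
# sampled every quarter inch as described in the methodology.
x = np.arange(0.0, 5.25, 0.25)          # assumed 5-inch-long nose cone
d = 2.70 * np.sqrt(x / 5.0)             # placeholder profile reaching the 2.70 in body diameter

# Fourth-degree polynomial curve fit, mirroring the Excel fit in the methodology
coeffs = np.polyfit(x, d, 4)
profile = np.poly1d(coeffs)

# Evaluate the fitted profile; these points would then be revolved 360 degrees
# in the CAD package to form the nose-cone surface.
x_fine = np.linspace(0.0, 5.0, 50)
print(profile(x_fine)[:5])
```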
For the basic file, the reference area for the rocket was set at 5.5990 in2 and the temperature and pressure were set at 530 R and 14.7 psi in the boundary condition file respectively. There was an added boundary condition (source) for


the rocket engine cases, the exhaust temperature was set as 2442 R and the exhaust velocity was set as Mach 2.0. The last step involved with the solution was importing the results into FieldView® to use as the post processing software. Here the flow was examined for regions of high turbulence and to determine the pressure on the rocket. The governing differential equations used for this project are the Euler (for the inviscid solutions) and Navier-Stokes (for the viscous solutions) equations. In integral form, the steady state form of the equations used by the Cobalt software are as follows1:

∫∫_δ (f î + g ĵ + h k̂) · n̂ dδ = ∫∫_δ (r î + s ĵ + t k̂) · n̂ dδ    (1)

where the inviscid flux vectors are

f = [ρu, ρu² + p, ρuv, ρuw, u(ρe + p)]ᵀ    (2)
g = [ρv, ρuv, ρv² + p, ρvw, v(ρe + p)]ᵀ    (3)
h = [ρw, ρuw, ρvw, ρw² + p, w(ρe + p)]ᵀ    (4)

and the viscous flux vectors are

r = [0, τxx, τxy, τxz, a]ᵀ    (5)
s = [0, τxy, τyy, τyz, b]ᵀ    (6)
t = [0, τxz, τyz, τzz, c]ᵀ    (7)

Here a = uτxx + vτxy + wτxz + kTx, b = uτxy + vτyy + wτyz + kTy, and c = uτxz + vτyz + wτzz + kTz.
Results Obtained
From the results of the baseline case it was seen that at the low end of the Mach number range, the coefficient of drag was much higher than the values seen at the upper end. It was also seen that the coefficient of drag increased when an angle of attack was applied to the rocket. Incorporating a 7.6 mph (Ma = 0.01) cross flow into the Ma = 0.1 case caused about a 4% increase in the coefficient of drag, while the same cross flow for the Ma = 0.6 case increased the coefficient of drag by about 1.5%. As expected, the velocity seen on one side of the rocket was greater than that on the other side. It was also seen that the angle of attack caused an area of high velocity on the aft end of the rocket. Examining the coefficient of drag in more detail over the range investigated, it was seen that the equation relating the Mach number to the coefficient of drag was:

y = 0.4204x^(−1.9909), where the value of x represents the Mach number and y represents the coefficient of drag. Figure 1 shows the curve of the inviscid solutions, using the Euler equations, run with no angle of attack. The maximum value of the coefficient of drag was 40.616 at Mach 0.1, while the minimum value was seen to be 1.1332 at Mach 0.6. The high initial drag coefficient can be attributed to high pressure drag. As the Mach number increases, the coefficient of drag decreases as the inertial force overcomes the drag force. The last area of interest was the pressure at the tip of the nosecone. It was seen that the variation in pressure was less than 1% for all the solutions run with inviscid flow. In contrast, the pressure was seen to increase when the solution was run using the turbulent Navier-Stokes equations. Both Mach numbers had a pressure increase, at the tip of the nosecone, of about 5.5%. The results can be seen in Table 1.
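The power-law relationship above can be recovered directly from the zero-angle-of-attack baseline drag coefficients (the values listed later in Table 2) with a log-log least-squares fit. The sketch below is a minimal check of that fit, not the post-processing actually used with the solver.

```python
import numpy as np

# Baseline (no exhaust, zero angle of attack) results: Mach number vs. drag coefficient
mach = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
cd   = np.array([40.616, 10.615, 4.6302, 2.606, 1.6934, 1.1332])

# Fit Cd = a * Ma^b by linear regression in log-log space
b, log_a = np.polyfit(np.log(mach), np.log(cd), 1)
a = np.exp(log_a)

print(f"Cd ~ {a:.4f} * Ma^({b:.4f})")   # expected to be close to 0.4204 * Ma^-1.9909

# Predicted drag coefficient at an intermediate Mach number
print(f"Cd at Ma 0.35: {a * 0.35**b:.3f}")
```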


Figures and Tables

[Figure 1. Inviscid results with zero angle of attack: coefficient of drag versus Mach number, with the power-law fit y = 0.4204x^(−1.9909), R² = 0.9998.]

Table 1. Pressure values.
Ma | Angle of Attack (deg) | Type of Flow | Pfront (psi)
0.1 | 0 | inviscid | 17.9908
0.6 | 0 | inviscid | 17.9392
0.1 | 5.71 | inviscid | 17.976
0.6 | 0.955 | inviscid | 18.0479
0.1 | 0 | viscous | 18.986
0.6 | 0 | viscous | 18.927

Figure 2. Rocket Engine Geometry and Mesh.

Future Work Due to the complexity of obtaining a fine enough grid behind the rocket engines, the gridding process took longer than was expected. The next step in this project will be to first refine the grid even further for the rocket engine case to obtain accurate results. A finer grid is necessary due to the turbulence from not only the airflow over the rocket but also the exhaust. With the improved grid, solutions should be run at several different protrusion depths, with and without the engines ‘on’. These results should then be compared to the baseline solutions to determine the effect of the engine protrusion on the coefficient of drag. Lastly, solutions should be run varying the angle of attack at the upper and lower end of the Mach number scale to see how much more of an effect the protrusion depth will have on the coefficient of drag. Table 2 indicates the solutions matrix that is intended to be completed by future work.


Table 2. Future work solution matrix (coefficient of drag).
Case | Angle of Attack | Baseline (no exhaust) | 0 in. protrusion, Exhaust off | 0 in. protrusion, Exhaust on | 0.5 in. protrusion, Exhaust off | 0.5 in. protrusion, Exhaust on | 1 in. protrusion, Exhaust off | 1 in. protrusion, Exhaust on
Ma 0.1 | 0° | 40.616 | - | 1.6451 | - | - | - | -
Ma 0.2 | 0° | 10.615 | - | - | - | - | - | -
Ma 0.3 | 0° | 4.6302 | - | - | - | - | - | -
Ma 0.4 | 0° | 2.606 | - | - | - | - | - | -
Ma 0.5 | 0° | 1.6934 | - | - | - | - | - | -
Ma 0.6 | 0° | 1.1332 | - | 0.39292 | - | - | - | -
Ma 0.1 | 5.71° | 42.223 | - | - | - | - | - | -
Ma 0.6 | 0.955° | 1.1526 | - | - | - | - | - | -

Acknowledgments The author would like to thank Dr. Jed Marquart, Professor of Mechanical Engineering at Ohio Northern University, for his guidance on this project, and Bill Strang of Cobalt Solutions, LLC, for the model rocket on which the geometry was based and his insight on the project. The author also would like to thank Carolyn Dear at Pointwise, Inc., for her assistance with creating the geometry of the rocket and flow domain, and Kevin Caldwell, senior mechanical engineer at Ohio Northern University, for his assistance with modeling the rocket in ProE Wildfire®. In addition, the author would like to thank Mrs. Mary Roberts, a Technical Services Manager with Estes-Cox, for information pertaining to engine exhaust velocities and temperatures. Last, but not least, the author wishes to thank the Ohio Space Grant Consortium for the sponsorship of this project through their undergraduate scholarships. References 1. “Cobalt 3.0 User’s Manual”, Cobalt Solutions, LLC, Dayton, OH, 2005. 2. “FieldView 11 User’s Guide”, Intelligent Light, Rutherford, NJ, 2005. 3. “Brown, Edwin D. “Model rocket engine performance. Estes Industries Technical Note, Copyright 1978.


Applications of Ellipses

Student Researcher: Sarah A. Leary

Advisor: Dr. Darrin Frey

Cedarville University Department of Science and Mathematics

Abstract Practically every trigonometry class has a chapter on conic sections. Often students do not understand the usefulness of this chapter. One portion of conic sections is the portion on ellipses. Students learn how to find the vertices, foci, and centers of ellipses. An application of this would be to determine the equation for the orbit of the earth. I will teach a lesson that informs students that planets travel on elliptical paths as opposed to circular paths. With the information we receive from satellites and space probes we can determine the path of each of the planets using the learned formulas. DOMAIN A: PREPARATION Objective(s): The students shall be able to apply their knowledge of ellipses to problems involving the orbits of planets and comets correctly 75 percent of the time. Ohio Academic Content Standards to be met: By graphing the ellipses, the students must specify locations and describe spatial relationships using coordinate geometry. They will also use the Cartesian plane.

In solving the equations for the ellipses, the students must estimate, compute, and solve problems involving scientific notation, square roots, and numbers with integer exponents. DOMAIN B: ENVIRONMENT Materials: Worksheet and numbered cards. DOMAIN C: TEACHING (include time estimates) Motivation: 1 minute (Anticipatory Set, Introduction, Building Meaning) You may be wondering how ellipses can be used. Today we will be finding the equations for the orbits of planets. The numbers will not be as nice as we have seen in our previous homework, but they will be more realistic. Procedure(s): As students enter the room I will have them draw a card from a box. Each card will have either a 1, 2, 3, or 4 on it. I will tell them that we will use the cards later on in class. I will start class by answering questions over yesterday's homework on ellipses. I will then collect the homework assignment. I will then tell the students to divide into groups 1, 2, 3, and 4, depending on which card they drew. Before I hand out the worksheet I will have them take notes. I will give them the terms aphelion (the furthest point away from the sun in the orbit) and perihelion (the closest point to the sun in the orbit), define the mean distance as half the length of the major axis of the ellipse (the semimajor axis), and inform them that the sun is a focus and not the center of the orbit. Guided Practice: 20 minutes. After this information has been given I will hand out the worksheet with 3 story problems relating to orbits. We will go through the first problem as a class. I will require the students to draw a picture of the orbit to help label the picture and set up the problem. I will ask if there are any questions. Then I will have the students work in their groups to complete the remaining 2 story problems. I will walk around the room, monitor progress, and answer questions. Closure: 1 minute


I will collect the assignment at the end of the period. I will point out that ellipses do not always have nice, neat answers, but that they are used in real-life situations, such as finding the equation of an orbit given certain distances. Independent Practice: The two story problems will serve as independent practice. If a group was unable to finish the problems, then the members will have to complete the worksheet independently at home and turn it in tomorrow. Assessment: The worksheet will be graded for completion. There will be a quiz given tomorrow over graphing and writing the equation of an ellipse. The students were all able to complete the assignment with the help of their peers and with my assistance. Results and Critique of Lesson: The lesson went smoothly. Establishing the terminology was the most important part of the lesson. The students were not familiar with terms such as aphelion and perihelion. Without these terms they would not have been able to solve the problems. Dividing the students into groups was effective for 2 of the 3 classes. One class had several personality conflicts, which made group work difficult. Discernment should be used in determining whether group work will be effective for an individual class. The story problems forced the students to apply their knowledge of ellipses. The larger numbers used also challenged them and made them think more extensively. Overall the lesson went well. (Questions from Precalculus with Limits by Aufmann, Barker, and Nation, pp. 543-544) Group Names_______________________________________________________ 1. The distance from Saturn to the sun at Saturn's aphelion is 934.34 million miles, and the distance

from Saturn to the sun at its perihelion is 835.14 million miles. Find an equation for the orbit of Saturn.

2. Venus has a mean distance from the sun of 65.08 million miles, and the distance from Venus to

the sun at its aphelion is 65.48 million miles. Find an equation for the orbit of Venus. 3. Earth has a mean distance of 93 million miles and a perihelion of 91.5 million miles. Find an

equation for Earth’s orbit.


Waterflooding Ohio’s Berea Sandstone Formation

Student Researcher: Zachary S. Lemon

Advisor: Dr. Benjamin Thomas

Marietta College Petroleum Engineering and Geology Department

Abstract Waterflooding is one of the most commonly used methods of secondary recovery. This method of artificial drive mechanism is typically used when the natural primary recovery becomes insufficient, but can also be used concurrently with the primary recovery (3). Waterflooding is essentially what it says. When the natural reservoir energy becomes insufficient, water is injected into the reservoir to supplement the natural driving mechanisms and basically force the residual oil out of the formation and into a producing well. Since water and oil are immiscible, the water drives the existing recoverable oil from the pore spaces of the rock. Waterflooding is nothing new to the Appalachian basin. In fact, waterflooding was first reported in 1880 when John F. Crall discovered that allowing water to enter the producing formation enhanced the production in off-set wells (5) Project Objectives The scope of this research project is to determine the suitability of Ohio’s Berea sandstone reservoirs for waterflooding operations. The vast majority of Ohio’s Berea sandstone reservoirs are produced using only the formation’s natural drive mechanisms, which in most cases are solution-gas driven. This form of natural drive mechanism typically yields between 5-25% of the initial oil in place. This leaves anywhere from 75-95% of the oil in the ground (1). Methodology Used The first step taken to determine the suitability of implementing waterflood operations was researching the past waterflooding attempts in Ohio. This would provide insight on the way these early waterfloods were operated. Information such as well-spacing, production and injection well completions, injection rates, injection pressures, injection volumes, production rates, and production volumes were obtained by researching Ohio’s oil and gas databases. Berea sandstone core data was also obtained through local operators to determine the reservoir characteristics. The historical waterflood successes and failures were analyzed using information obtained through various articles and journal publications dating back to the early 1940s. The second step of this research is to perform a reservoir simulation using the Computer Modeling Group’s reservoir simulation software. Using the data obtained from Ohio databases and local operators, a three-dimensional reservoir model will be used to history match prior well performance in a specific field, and furthermore, predict future oil production once waterflooding has been implemented. This model can then be used to determine the potential and economic feasibility of waterflooding other Berea sandstone fields throughout Ohio. Results Obtained The results obtained through researching Ohio’s oil and gas databases revealed that waterflooding has rarely been used in the state of Ohio. Waterflooding was legalized in the state on March 31, 1939. Secondary recovery of oil reached its peak in 1942, accounting for 15.9% of Ohio’s daily oil production (5). However, since then the oil recovered through secondary recovery has drastically declined. Table 1 below shows a comparison of Ohio’s waterflood production to that of neighboring states (4).


Ohio does not have a successful history of waterflooding attempts. General lack of experience was the reason for many waterflood failures. Problems in early Ohio waterfloods occurred with very early water breakthrough, limited injectivity problems and a general lack of response. It was suspected that water was often not confined to oil zones during pilot flood testing because converted nitroglycerin-shot holes were used as injection wells. The injected water may have been lost to bedding planes or another porous zone through fractures created by the shots or through excessive injection pressures. Injection water quality and quantity were both important factors that many operators neglected. Using poor quality water often resulted in plugged injection wells and project failures (4). The most successful waterflooding project in Ohio was performed on the Chatham Field located in Medina County, Ohio beginning in late 1939. This field had an estimated 300 million barrels of oil initially in place. The primary recovery efficiency of this field was only 4%, but after waterflooding was implemented the recovery efficiency increased to nearly 17% (2). The results from the reservoir simulation model have not been completed to date.
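To put those recovery efficiencies in perspective, the short sketch below converts them into recovered volumes for the Chatham Field figures quoted above; it is a back-of-the-envelope illustration, not output from the CMG reservoir simulation model.

```python
OOIP = 300e6  # original oil in place for the Chatham Field, barrels

primary_recovery = 0.04 * OOIP       # ~4% primary recovery efficiency
total_after_flood = 0.17 * OOIP      # ~17% recovery after waterflooding
incremental = total_after_flood - primary_recovery

print(f"Primary recovery:        {primary_recovery / 1e6:.0f} million bbl")
print(f"After waterflooding:     {total_after_flood / 1e6:.0f} million bbl")
print(f"Incremental from flood:  {incremental / 1e6:.0f} million bbl")
```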

Table 1. Primary & Waterflood Oil Production, Appalachian Basin (1989 data from varied sources).

                       Oil Production (bbl/day)        Waterflood Production
State                Primary   Waterflood     Total         (% of total)
Kentucky               6,740        7,860    14,600                 53.8
Ohio                  27,850          150    28,000                  0.5
Pennsylvania           4,400        3,000     7,400                 40.5
West Virginia          5,000        1,750     6,750                 25.9
Total/Average         43,990       12,760    56,750                 22.5

References
1. Weber, L. C. and D. C. Freeman. "The Applicability of Waterflooding in the Appalachian Basin." Paper SPE 51088. SPE Eastern Regional Meeting. Pittsburgh, PA. November 1998.
2. Tomastik, Thomas E. "Large Potential Reserves Remain for Secondary Recovery in Ohio." Oil and Gas Journal. January 1999.
3. Ahmed, Tarek. Reservoir Engineering Handbook. 2nd Edition. 2001.
4. Blomberg, John R. "Ohio Fields Have Waterflood Potential." American Oil and Gas Reporter. November 1994.
5. Blomberg, John R. "History and Potential Future of Improved Oil Recovery in the Appalachian Basin." Paper SPE 51087. SPE Eastern Regional Meeting. Pittsburgh, PA. November 1998.


Reliable Invasive Blood Pressure Measurements Using Fourier Optimization Techniques

Student Researcher: Lily Lim

Advisor: Dr. Bruce Taylor

The University of Akron Department of Biomedical Engineering

Abstract
Arterial blood pressure is a basic hemodynamic variable routinely monitored in critical care. Conventionally, it is measured invasively using a fluid-filled catheter-transducer system. However, the dynamic characteristics of the system often result in waveform distortion and inaccurate measurements. The current study proposed and developed a novel method for approximating and reproducing the true blood pressure waveform using a numerical optimization technique. To validate the method, the catheter-transducer system with the addition of a counter pressure source was simulated and tested with air bubbles of five sizes at five locations in the system. The method was successful in reproducing the true blood pressure waveforms regardless of variations in the system characteristics and changes in the system over time. The method developed for this research provides more accurate measurements than contemporary catheter-transducer systems.

Introduction
Arterial blood pressure is a basic hemodynamic variable monitored in the intensive care unit. It is commonly measured using fluid-filled catheter-transducer systems to provide real-time and continuous pressure monitoring. Arterial pressure measurements are often used to identify pathophysiologic abnormalities and to guide therapeutic interventions in critically ill patients. Accordingly, accurate measurements are crucial to avoid misdiagnosis and mismanagement. The catheter-transducer system used in critical care settings behaves, to a first approximation, as a second-order underdamped system [1]–[9]. It can be expressed mathematically by a second-order differential equation with characteristics determined by the compliance, inertance, and resistance of the system:

\[ IC\,\frac{d^{2}P_{1}}{dt^{2}} + RC\,\frac{dP_{1}}{dt} + P_{1} = P_{2} \qquad (1) \]

where I is the inertance, C is the compliance, and R is the resistance of the catheter-transducer system; P1 is the output pressure signal measured by the transducer, P2 is the driving pressure at the intravascular tip of the catheter, and t is time. The parameters R, I, and C define the natural frequency, Fn (in Hertz), and the damping coefficient, ζ, of the catheter-transducer system, which indicate the adequacy, or fidelity, of the system, as given in equations (2) and (3):

\[ F_n = \frac{1}{2\pi\sqrt{IC}} \qquad (2) \]

\[ \zeta = \frac{R}{2}\sqrt{\frac{C}{I}} \qquad (3) \]

An inadequate system, due to its low natural frequency, may result in waveform distortion and erroneous measurements. As an underdamped system, the catheter-transducer system tends to record falsely high systolic pressures and low diastolic pressures. Conversely, overdamping results in falsely low systolic pressure and high diastolic pressure readings [2]–[3].
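As a hedged illustration of equations (1)-(3) (this is not code from the study), the following sketch builds the second-order transfer function using the parameter values reported in the Methodology section below and confirms that such a system is strongly underdamped:

import numpy as np
from scipy import signal

I = 5.9e8      # inertance, kg·m^-4   (value reported in the Methodology section)
C = 4.3e-13    # compliance, m^3/Pa   (value reported in the Methodology section)
R = 5.1e9      # resistance, kg·m^-4·s^-1

Fn   = 1.0 / (2.0 * np.pi * np.sqrt(I * C))   # equation (2), Hz
zeta = (R / 2.0) * np.sqrt(C / I)             # equation (3), dimensionless

# P1/P2 = 1 / (IC s^2 + RC s + 1): the catheter-transducer dynamics of equation (1)
system = signal.TransferFunction([1.0], [I * C, R * C, 1.0])
t, p1 = signal.step(system)                   # response to a unit step in driving pressure P2

print(f"Fn ~ {Fn:.1f} Hz, zeta ~ {zeta:.2f}; peak step response ~ {p1.max():.2f} "
      "(values above 1 indicate the ringing and overshoot described in the text)")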


There are several factors that can lead to poor dynamic response. Air bubbles in the tubing system are one of the most frequent sources of error in arterial pressure monitoring. Air bubbles often damp the propagation of the mechanical signal, causing a distorted arterial waveform and erroneous pressure readings [1]–[2], [5]–[6]. Other factors that may alter the adequacy of a monitoring system include (a) long, narrow, and compliant pressure tubing, (b) an overly compliant diaphragm in the pressure transducer, (c) the presence of additional stopcocks, and (d) loose connections [1], [5]–[6]. The dynamic characteristics of available catheter-transducer systems may also differ because of variations in setup procedures, which can produce different hemodynamic pressure measurements even though the patient's condition has not changed [10]. With an improperly prepared or inadequately functioning monitoring system, not only the directly measured hemodynamic indices but also any derived variables will be erroneous, potentially invalidating the entire hemodynamic profile of the patient. Consequently, wrong clinical decisions may be made, resulting in inappropriate treatment for the patient.

Project Objectives
Accurate measurements of invasive blood pressure have always been desirable in critical care, but the nature of the measurement system has made them difficult to obtain in most cases. Previous studies [1], [5], [11] have shown the need to examine the adequacy of the system over time, which adds tedious procedures and demands attention from critical care providers during continuous pressure monitoring. In this study, an optimized, Fourier-based correction algorithm was used to approximate and reproduce the true blood pressure waveform. The proposed method assumes that the catheter-transducer system can be modeled as a second-order dynamic system [1]–[9]. The equivalent electrical circuit of such a system is given in Figure 1. In the circuit, the voltage source is analogous to the pressure source, the resistor to the fluid resistance, the inductor to the fluid inertance, the capacitor to the compliance of the system, and the current to the fluid flow in the system [12]–[13]. Applying Kirchhoff's Voltage Law,

\[ V_{in}(t) - R\,i(t) - L\,\frac{di(t)}{dt} - \frac{1}{C}\int i(t)\,dt - V_{C(0)} = 0. \qquad (4) \]

Suppose that the current in the circuit is zero; then

\[ V_{in}(t) - V_{C(0)} = 0. \qquad (5) \]

Rearranging the equation,

\[ V_{in}(t) = V_{C(0)}. \qquad (6) \]

If the current is zero, the input voltage can be determined from the voltage across the capacitor. In the catheter-transducer system, this implies that the blood pressure measured by the transducer will equal the true blood pressure if the fluid in the system is not in motion. In order to drive the current to zero, a counter voltage source, Vctr, is added in series with the capacitor, as shown in Figure 2. The appropriate counter voltage is generated from Fourier coefficients that are manipulated iteratively by an optimization algorithm. The iterations continue until the Fourier coefficients used to generate the counter voltage source drive the current to virtually zero. As a consequence, the output voltage, Vc, closely approximates the input voltage, Vin. The ultimate goal of this method is to minimize the errors due to variations in assembly technique and time-dependent changes in the system, allowing physicians to obtain more accurate measurements.
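The following is a conceptual sketch of the iteration described above, not the study's implementation. It assumes a synthetic driving waveform, uses the R, inertance, and C values reported in the Methodology section, and adjusts the counter-source Fourier coefficients with a nonlinear least-squares fit until the simulated current is essentially zero.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

R, L, C = 5.1e9, 5.9e8, 4.3e-13        # system values from the Methodology section (SI units)
T = 0.6                                 # one cardiac cycle, s (as in Figure 3)
t = np.linspace(0.0, T, 120)
w0 = 2.0 * np.pi / T
MMHG = 133.322                          # Pa per mmHg

# Synthetic "true" driving pressure for the simulated validation (illustrative only).
p_true = MMHG * (100 + 20*np.sin(w0*t) + 8*np.sin(2*w0*t + 0.8) + 4*np.sin(3*w0*t + 1.6))

def fourier(coeffs, tt):
    """Truncated Fourier series; coeffs = [a0, a1, b1, a2, b2, ...]."""
    out = np.full_like(tt, coeffs[0], dtype=float)
    for k in range(1, (len(coeffs) - 1) // 2 + 1):
        out += coeffs[2*k - 1]*np.cos(k*w0*tt) + coeffs[2*k]*np.sin(k*w0*tt)
    return out

def current_residual(coeffs):
    """Loop equation with the counter source in series with the capacitor:
    L di/dt = P_true - Vctr - R*i - Vc, with C dVc/dt = i.  The optimizer
    drives the resulting current i(t) toward zero."""
    def rhs(ti, y):
        i, vc = y
        vctr = fourier(coeffs, np.array([ti]))[0]
        p_in = np.interp(ti, t, p_true)
        return [(p_in - vctr - R*i - vc) / L, i / C]
    sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0], t_eval=t, max_step=T/200)
    return sol.y[0]

fit = least_squares(current_residual, np.zeros(2*4 + 1))   # constant term plus 4 harmonics
p_reconstructed = fourier(fit.x, t) / MMHG                 # approximates the true waveform once i is negligible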


The purpose of the study consisted of two parts. The first part was to explore the effectiveness of the correction algorithm in reproducing true blood pressure waveforms. Specifically, the optimized Fourier coefficients produced by the correction algorithm were compared with the Fourier coefficients of the true blood pressure waveforms and the differences were examined. The second part was to determine the factors that affect the capabilities of the correction algorithm. Specifically, different air bubble sizes and locations in the system were tested with the algorithm and the differences in the resulting Fourier coefficients were analyzed.

Methodology
A simulation model of the catheter-transducer system with a counter pressure source was developed. The characteristics of the system were estimated from an arterial pressure monitoring tubing set. The resistance of the system, R, was determined using the following equation:

\[ R = \frac{\Delta P}{Q} = \frac{8\mu l}{\pi r^{4}} \qquad (7) \]

where µ is the fluid viscosity, l is the length of the tubing, and r is the radius of the tubing [14]. The resistance was calculated to be (5.1 ± 0.3) × 10⁹ kg·m⁻⁴·s⁻¹. The equation used for determining the inertance of the system, L, was given as:

\[ L = \frac{\rho l}{A} \qquad (8) \]

where ρ is the density of the fluid, l is the length of the tubing, and A is the cross-sectional area of the tubing [14]. The inertance was computed to be (5.9 ± 0.2) × 10⁸ kg·m⁻⁴. The compliance, C, is related to the inertance, L, by the relationship:

\[ C = \frac{1}{L\,\omega_n^{2}} \qquad (9) \]

where ωn is the natural frequency of the tubing [12], which was determined by investigating the step response of the tubing. The natural frequency was found to be 63 ± 3 rad/s. Using the inertance calculated previously, the compliance, C, was (4.3 ± 0.4) × 10⁻¹³ m³/Pa. The presence of air bubbles in the system alters the system characteristics and may affect the recording of pressure waveforms; the effects of air bubbles on the proposed method were therefore investigated in this study. The compliances for air bubble sizes of 5, 10, 15, 20, and 25 µl were approximated using information from a previous study [12]. The changes in resistance and inertance due to air bubble locations at 1.5, 1.2, 0.9, 0.6, and 0.3 meters from the transducer end were determined using equations (7) and (8), respectively. An algorithm using a nonlinear least-squares fit was developed to obtain the optimal Fourier coefficients by minimizing the differential pressure, Vc. Simulations were performed using three input blood pressure waveforms. For each waveform used, the simulation was repeated with five sizes of air bubbles, each at five locations in the system. For comparison, similar simulations were performed using a conventional catheter-transducer model. The Fourier coefficients of the true, optimized, and uncorrected pressure waveforms were obtained, and the differences between the Fourier coefficients of the optimized and true waveforms, the uncorrected and true waveforms, and the optimized and uncorrected waveforms were determined. A two-factor ANOVA was conducted for each case at an α-level of .05.
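For orientation, a minimal sketch of equations (7)-(9) is shown below; the tubing dimensions and fluid properties are assumptions chosen only to be of the same order as the values reported above, not the study's measured inputs.

import numpy as np

mu  = 1.0e-3          # fluid viscosity, Pa*s (assumed, roughly that of saline)
rho = 1.0e3           # fluid density, kg/m^3 (assumed)
l   = 1.5             # tubing length, m (assumed)
r   = 0.9e-3          # tubing inner radius, m (assumed)
A   = np.pi * r**2    # cross-sectional area, m^2

R = 8.0 * mu * l / (np.pi * r**4)   # equation (7), kg m^-4 s^-1  (~5e9 for these inputs)
L = rho * l / A                     # equation (8), kg m^-4       (~6e8 for these inputs)

wn = 63.0                           # measured natural frequency, rad/s (from the text)
C  = 1.0 / (L * wn**2)              # equation (9), m^3/Pa        (~4e-13 for these inputs)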


Results
Figure 3 shows the blood pressure waveforms produced by the conventional catheter-transducer model (left column) and by the Fourier-based optimization technique (right column). The input pressure waveforms are also shown in the figure for comparison as the air bubble located at the transducer end increases in size. Distortions can be clearly observed in the uncorrected waveforms; conversely, the optimized waveforms were superimposed on the true pressure waveforms. The mean and standard deviation of the differences in the Fourier coefficients between the optimized and true waveforms, the uncorrected and true waveforms, and the optimized and uncorrected waveforms are presented in Table 1. It should be noted that the differences in the coefficients between the optimized and true waveforms had a mean closer to zero and less variation than those between the uncorrected and true waveforms. Statistical analyses investigated the effects of air bubble location and size, and the interaction between location and size, on the Fourier coefficients; the results are presented in Table 2. There was no difference in the Fourier coefficients between the optimized and true blood pressure waveforms across all the effects. With respect to the comparison between the uncorrected and true pressure waveforms, there was no difference in the Fourier coefficients due to air bubble size or to the interaction between air bubble location and size; however, the coefficients were found to be statistically different due to air bubble location. Similar results were found in the comparison between the Fourier coefficients of the optimized and uncorrected pressure waveforms.

Significance and Interpretation of Results
The major disadvantage of invasive blood pressure measurement is the inaccuracy resulting from the dynamic characteristics of the pressure monitoring system. In the current study, a novel method was developed to provide more accurate blood pressure readings by minimizing errors due to variations in system characteristics and time-dependent changes. Results from this study have shown that the Fourier-based optimization technique was capable of approximating and reproducing the true blood pressure waveforms regardless of the system characteristics and of changes in the system over time. The results also showed that the uncorrected pressure waveforms produced by conventional catheter-transducer systems were influenced by the system characteristics and were susceptible to changes introduced into the system. Statistical results suggested that the distortions in pressure waveforms were affected by the location of the air bubble in the system. However, variations in air bubble size were found to have no effect on waveform reproduction, which contradicts findings from previous studies [1]–[3], [5]–[6], [15]. This may be because the air bubble sizes used in this study did not cause sufficient changes to the system characteristics. The comparison between the Fourier coefficients of the optimized and uncorrected waveforms indicated that the proposed technique provided an improvement in producing undistorted, and hence more accurate, pressure waveforms compared with the conventional pressure measurement system. A significant implication of the current method is its capability to approximate and reproduce true blood pressure waveforms that are not influenced by the dynamic characteristics of the catheter-transducer system. Unlike previous studies involving blood pressure measurement correction [7]–[8], this technique was also not affected by other sources of error introduced into the system, such as air bubbles.
While this study yielded preliminary quantitative information regarding the effectiveness of the technique through simulation, more detailed studies are necessary to determine its performance in real-time monitoring. Because the correction method is implemented through numerical optimization, the routine requires time for the errors to converge before it produces the optimal Fourier coefficients; for this reason, the corrected pressure waveform would not be displayed instantaneously. Nevertheless, the time needed for the optimization to converge depends on the initial values used for the Fourier coefficients: if the initial values are close to the solution, the optimization converges more quickly, and vice versa. Hence, it is anticipated that if the initial values are chosen to represent a typical blood pressure waveform (as opposed to the random numbers used in this study), the optimization algorithm will converge in a shorter time. Even so, receiving more accurate blood pressure readings in exchange for slightly longer computation times would appear to be a favorable tradeoff.


In the current study, 16 Fourier coefficients were chosen for resynthesizing the true pressure waveform because even a demanding pressure waveform can be reconstructed with high fidelity using up to the fifteenth harmonic of the fundamental frequency [16]. However, some studies [2], [10], [17] have shown that a typical blood pressure waveform can be sufficiently reconstructed with only six to ten harmonics. If the number of Fourier coefficients used in the algorithm is reduced, the time necessary for the optimization to converge is expected to decrease, because the coefficients are the variables being manipulated by the algorithm. Accordingly, a comparison study to determine the effects of a reduced number of Fourier coefficients on the accuracy and computation time of the method is recommended. The performance of the proposed technique with respect to the beat-to-beat variability of pressure waveforms was not investigated in the current study. Since a Fourier transformation (or, more precisely, an inverse Fourier transformation) is employed in the correction technique, a continuous blood pressure signal would need to be segmented into single cardiac cycles before being passed into the optimization algorithm. The corrected pressure waveform of one cardiac cycle would then be reconstructed and displayed. Further development of signal processing software is necessary to execute the processes of segmenting, reconstructing, and displaying the continuous blood pressure signal. Although the lack of an immediate display of the corrected pressure signal is a limitation, the accuracy provided by this method may prove advantageous where other hemodynamic variables, such as cardiac output, are derived from the measured arterial pressure. If other variables are derived from an inaccurate pressure measurement, these variables will be erroneous and may lead to inappropriate resuscitative treatment for the patient. The proposed correction method has also demonstrated its effectiveness independent of changes in the pressure measurement system. Its application in critical care settings may lessen the task load of critical care nurses, because the effort and time needed to periodically troubleshoot and inspect the system for adequacy would be reduced. In conclusion, a Fourier-based optimization technique for approximating and reproducing true pressure waveforms in invasive blood pressure measurements was developed. A practical pressure monitoring system that provides reliable and accurate readings is possible by combining this method with adapted real-time signal processing software. Using such a system, accurate measurements can be attained in critical care settings, avoiding misdiagnosis and ultimately improving the quality of patient care.
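To illustrate the harmonic-count point (this is an illustration, not an analysis from the study), the sketch below reconstructs a synthetic one-cycle pressure waveform from its first N harmonics and reports the error; the waveform shape and sampling are assumed.

import numpy as np

def truncated_resynthesis(p, n_harmonics):
    """Keep the DC term and the first n_harmonics of the FFT of one cycle."""
    coeffs = np.fft.rfft(p)
    coeffs[n_harmonics + 1:] = 0.0        # zero everything above the cutoff
    return np.fft.irfft(coeffs, n=len(p))

t = np.linspace(0.0, 0.6, 240, endpoint=False)   # one 0.6 s cycle, assumed
p = 80 + 40 * np.sin(np.pi * t / 0.6)**3         # crude systolic-pulse shape, mmHg (assumed)

for n in (3, 6, 10, 15):
    err = np.max(np.abs(truncated_resynthesis(p, n) - p))
    print(f"{n:2d} harmonics: max reconstruction error {err:.3f} mmHg")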

Tables and Figures

Table 1. Descriptive statistics of the data.

Model                        Mean    Standard Deviation
Optimized vs. True         -0.006                 0.066
Uncorrected vs. True       -0.126                 2.995
Optimized vs. Uncorrected   0.119                 2.989

Table 2. ANOVA results.

                 Optimized vs. True       Uncorrected vs. True      Optimized vs. Uncorrected
Source          Mean Square  F-Value      Mean Square  F-Value      Mean Square  F-Value
Location              0.007     1.61           41.559   94.39*           41.163   89.49*
Size                  0.001     0.33            0.509    1.16             0.574    1.25
Location*Size         0.001     0.29            0.510    1.16             0.512    1.11

Note: * p < .001
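For readers who wish to reproduce an analysis of the kind summarized in Table 2, the sketch below runs a two-factor ANOVA with standard tools; the data frame, its column names ('diff', 'location', 'size'), and the random placeholder values are hypothetical, not the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
locations = [1.5, 1.2, 0.9, 0.6, 0.3]      # air bubble location, m from the transducer end
sizes = [5, 10, 15, 20, 25]                # air bubble size, microliters
rows = [{"location": loc, "size": s, "diff": rng.normal(0.0, 0.07)}
        for loc in locations for s in sizes for _ in range(16)]
df = pd.DataFrame(rows)                    # placeholder data, NOT the study's measurements

model = ols("diff ~ C(location) * C(size)", data=df).fit()
# anova_lm reports sums of squares, F-values, and p-values; mean squares = sum_sq / df
print(sm.stats.anova_lm(model, typ=2))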


Figure 1. The equivalent electrical circuit of a catheter-transducer system.

Figure 2. A counter voltage source added to the electrical model of a catheter-transducer system.


Figure 3. Blood pressure waveforms produced by the conventional catheter-transducer model (left column, solid line) and by the Fourier-based optimization technique (right column, solid line). The input pressure waveforms are shown as dashed lines for comparison as the air bubble located at the transducer end increases in size.

[Figure 3 panels: blood pressure (BP, mmHg) versus time (sec) over 0-0.6 s for 5, 10, and 15 µl air bubbles at the transducer end; left-column panels overlay True BP and Uncorrected BP, right-column panels overlay True BP and Optimized BP.]


References
1. Gardner, Reed M. and Hollingsworth, Karen W. "Optimizing the electrocardiogram and pressure monitoring." Critical Care Medicine 14 (July 1986): 651-658.
2. Gardner, Reed M. "Hemodynamic Monitoring: From Catheter to Display." Acute Care 12 (1986): 3-33.
3. Gardner, Reed M. "Direct Blood Pressure Measurements – Dynamic Response Requirements." Anesthesiology 54 (1981): 227-236.
4. Kleinman, Bruce, Powell, Steven, Kumar, Pankaj and Gardner, Reed M. "The Fast Flush Test Measures the Dynamic Response of the Entire Blood Pressure Monitoring System." Anesthesiology 77 (1992): 1215-1220.
5. Gibbs, Nancy C. and Gardner, Reed M. "Dynamics of invasive blood pressure monitoring systems: Clinical and laboratory evaluation." Heart & Lung 17 (January 1988): 43-51.
6. Gardner, Reed M. and Chapman, Radene H. "Trouble-Shooting Pressure Monitoring Systems: When do the Numbers Lie?" In Cardiopulmonary Critical Care Management, edited by R. J. Fallat and J. M. Luce. New York: Churchill Livingstone, 1988.
7. Lambermont, B., Gerard, P., Detry, O., Kolh, P., Potty, P., D'Orio, V. and Marcelle, R. "Correction of pressure waveforms recorded by fluid-filled catheter recording systems: A new method using transfer equation." Acta Anaesthesiologica Scandinavica 42 (1998): 717-720.
8. Wellnhofer, Ernst, Combé, Volker, Oswald, Helmut and Fleck, Eckart. "High Fidelity Correction of Pressure Signals from Fluid-filled Systems by Harmonic Analysis." Journal of Clinical Monitoring and Computing 15 (1999): 307-315.
9. Fry, Donald L., Noble, Frank W., and Mallos, Alexander J. "An Evaluation of Modern Pressure Recording Systems." Circulation Research 5 (January 1957): 40-46.
10. Henneman, Elizabeth A. and Henneman, Philip L. "Intricacies of blood pressure measurement: Reexamining the rituals." Heart & Lung 18 (1989): 263-271.
11. Promonet, Claude, Anglade, Daniel, Menaouar, Ahmed, Bayat, Sam, Durand, Michel, Eberhard, André and Grimbert, Francis A. "Time-dependent Pressure Distortion in a Catheter-Transducer System." Anesthesiology 92 (2000): 208-218.
12. Taylor, Bruce C., Ellis, David M. and Drew, Jeffrey M. "Quantification and Simulation of Fluid-filled Catheter/transducer Systems." Medical Instrumentation 20 (1986): 123-129.
13. Taylor, Bruce C. "Frequency Response Testing in Catheter-Transducer Systems." Journal of Clinical Engineering 15 (1990): 395-406.
14. Peura, Robert A. "Blood Pressure and Sound." In Medical Instrumentation: Application and Design (3rd Ed.), edited by J. G. Webster. New York: John Wiley & Sons, Inc., 1998.
15. Grossman, William. "Pressure Measurement." In Cardiac Catheterization, Angiography, and Intervention (5th Ed.), edited by D. S. Baim and W. Grossman. Baltimore: Williams & Wilkins, 1996.
16. Geddes, Leslie A. The Direct and Indirect Measurement of Blood Pressure. Chicago: Year Book Medical Publishers, 1970.
17. Gardner, Reed M. "System Concepts for Invasive Pressure Monitoring." In Critical Care, edited by J. M. Civetta, R. W. Taylor and R. R. Kirby. Philadelphia: Lippincott, 1988.


Flexible-Joint Mechanism for Space Applications

Student Researcher: José F. Llapa

Advisor: Dr. Paul P. Lin

Cleveland State University Mechanical Engineering Department

Abstract
Solar energy is used in many applications, and it has also been the only means of providing electricity to satellites and the International Space Station. However, solar energy conversion efficiency is relatively low, and solar collector panels face the sun for only about half a day, making it necessary for the panels to be very large, which is undesirable. To address this problem in space, a new concept called Energy Transformation Technology is being considered by NASA. The basic concept is to use a series of flywheels to store the energy collected by the solar panels. A flywheel under consideration by NASA is about the size of a typewriter. The objective of this project is to develop a 3D mechanism that can function as a flexible joint, sustaining large translational and rotational displacements while softly connecting two flywheel modules without interfering with their independent motions. The 3D mechanism has been designed and is currently being animated. A real model will be built from commercially available parts and from specially designed parts fabricated on a rapid prototyping machine.

Project Objectives
The project objectives are to design a 3-D mechanism consisting of two plates connected by piston-cylinders that can support sets of flywheels on the plates. The design will then be animated using Cosmos software, any necessary changes to the design will be made, and finally a working model will be built using commercially available parts and parts made at school.

Methodology
The project's main objective is to design a mechanism that provides maximum displacement and rotation between two plates. To provide vertical displacement between the plates, air cylinders are used, and these also provide cushioning. When a normal load is applied to one of the plates, the cylinders displace; therefore, joints connecting the cylinders and plates are needed. The working model needs to be built from light yet strong and economical material.

Results Obtained
The mechanism in Figure 1 was built and is being simulated. The simulation consists of adding loads of different sizes at random locations on the assembly. The mechanism's job is to react safely to these applied forces and return to its initial position. The simulation resembles actual behavior in space, where it is not known in advance what forces will act or in what direction; therefore, the plates and cylinders must be strong enough and respond quickly enough to handle abrupt loads while still returning to the initial position, which is provided by the piston-cylinders connecting the two plates. Figures 2 to 5 show the behavior of the full assembly during simulation. The initial (Figure 2) and final (Figure 5) positions of the assembly are the same, which means that the mechanism successfully recovered after random loads were applied. This is possible because of the internal forces provided by the piston-cylinders. In Figure 3, a force pushes the top plate of the assembly, and another force (Figure 4) acting parallel to the plates' surfaces causes the top plate to rotate. The piston-cylinders counteract the applied forces and bring the assembly back to its initial position, as seen in Figure 5.


Conclusion
The assembled mechanism responds well to unexpected small loads; however, it will disintegrate if large, abrupt loads are applied close to the edges of the plates, as shown in Figure 6. The mechanism also seems to provide better reaction force when loads are applied normal to the top plate. CosmosWorks has been used to obtain stress and displacement results for the mechanism. The maximum displacement and load that the mechanism can sustain are currently being investigated. A real model of the mechanism will be built.

Figures

[Figure 1 component callouts: flywheels support plate, upper universal joint, sliding rod, cylinder main body, lower universal joints, mechanism bottom plate.]
Figure 1. Exploded view of Flexible-Joint Mechanism Assembly.

Figure 2. Initial position of assembly.
Figure 3. Force applied at top plate causes displacement.

Figure 4. Another force acting on top plate causes the assembly to rotate.

Figure 5. Piston cylinders allow the assembly to get back to initial position.

Figure 6. Abrupt forces cause the assembly to disintegrate.


Geometry and Rockets

Student Researcher: Allison S. Mackay

Advisor: Dr. Tena Roepke

Ohio Northern University Mathematics Education

Abstract
Modern educational thought stresses the necessity of applying the concepts students are learning to the world around them. In addition, a heavy emphasis has been placed on meeting the needs of the diverse learners who make up today's secondary school classrooms. The complicated structure of a rocket launched into space can be thought of, in its simplest form, as a series of geometric shapes. This lesson seeks to provide a practical application for the area and volume of these shapes, as well as to demonstrate the construction of three-dimensional structures from two-dimensional figures. Students will see these concepts concretely as they build their own pop rockets.

Objectives and Standards
This lesson is meant to be a supplemental activity to aid student understanding of the concepts of area and volume as stated in the 8th grade Measurement Standard Number 9. This activity, while beneficial for all students, is an especially effective tool for students who possess what Howard Gardner refers to as visual/spatial intelligence (Gardner's Theory of Multiple Intelligences). The activity engages students in active learning and gives them an opportunity to see a concrete example of measurement formulas. In addition, it expands students' knowledge of the ways in which shapes and areas appear in the real world and the world of science. Finally, it is a fun activity that should capture the students' interest as they are learning.

Materials and Procedure
Materials needed for this lesson include construction paper, tape, film canisters, antacid tablets, scissors, compasses, and rulers. The launching of the rockets should be done outside in an open space. The students should be divided into groups of two to four. The procedure is as follows:

Creating the Rocket
1. Have students construct and cut 1 large rectangle, 4 equivalent trapezoids, and 1 circle using the compass and straight edge.
2. Review the area formulas for rectangles, circles, and trapezoids and have students use the ruler to calculate their measurements.
3. Next, have students roll the rectangle into a cylinder and tape it around the film canister to form the base of the rocket. Then measure the volume of the resulting cylinder. (See picture.)
4. Tape the four trapezoids near the bottom of the cylinder to form the fins of the rocket.


5. Remove one-fourth of the circle with the scissors and have students calculate the new area (3/4 of the area of the original circle).

6. Next, roll it into a cone shape and calculate the volume of the resulting cone. Finally, attach it to the top of the cylinder.
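A worked example of the calculations in steps 3, 5, and 6 is sketched below, assuming illustrative dimensions (a 12 cm × 15 cm rectangle and a circle of radius 6 cm); the lesson does not prescribe these numbers.

import math

# Step 3: rectangle rolled into a cylinder -- the 15 cm side becomes the
# circumference and the 12 cm side becomes the height.
circumference, height = 15.0, 12.0
radius = circumference / (2 * math.pi)
cylinder_volume = math.pi * radius**2 * height          # about 215 cm^3

# Step 5: remove one-fourth of a circle of radius 6 cm.
R = 6.0
sector_area = 0.75 * math.pi * R**2                     # 3/4 of the full circle's area

# Step 6: the 3/4 sector rolled into a cone -- its arc becomes the base
# circumference and R becomes the slant height.
base_radius = 0.75 * R                                  # from 2*pi*r = 0.75*(2*pi*R)
cone_height = math.sqrt(R**2 - base_radius**2)
cone_volume = math.pi * base_radius**2 * cone_height / 3   # about 84 cm^3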

Launching the Rocket (quoted from: http://www.nasa.gov/audience/forkids/activities/A_Pop_Rockets.html)
1. Put on safety goggles.
2. Turn the rocket upside down and fill the film canister one-third full of water. (Work quickly on the next steps.)
3. Drop in one-half of the fizzy tablet.
4. Snap the lid on tight.
5. Stand the rocket on a flat surface.
6. Stand back and watch it launch!
7. Have a contest to see which student's rocket goes the highest.

Images above: Credit: NASA (http://www.nasa.gov/audience/forkids/activities/A_Pop_Rockets.html)

Assessment
Assessment for this activity could be implemented in a variety of ways. It would be beneficial for students to write some sort of journal or response detailing what they learned through the process. This would help the teacher with further review of the content and extension of the activity. Assessment can also involve measuring the accuracy of the students' calculations, their adherence to directions, and the final product. Further extension of this activity could involve discussion and experimentation on the relationship between two-dimensional areas and three-dimensional volumes. In addition, research regarding the use and development of various shapes and solids in rocket engineering could be explored.

Critique and Conclusion
Building geometric rockets is a good way for students to see geometry appearing in a real-life setting. This brings both meaning and motivation to student learning. Students could be encouraged to pursue further research on related topics that interest them. One drawback is the time the activity could take, depending on the ages and abilities of the students. In addition, adequate materials and space to launch the rockets would be required. The setup and implementation of this lesson are rather simple, however, and it could be used effectively as part of a test review or as a reinforcement activity. Overall, this activity is a good tool to aid students in solidifying their knowledge of geometric formulas.


Microbial Degradation of Petroleum Hydrocarbons

Student Researcher: Wilbert E. Meade

Advisor: Krishnakumar Nedunuri

Central State University Department of Water Resources Management

Abstract
This research was part of an overall investigation aimed at determining optimal agronomic practices for successful cleanup of industrial petroleum hydrocarbons using native grasses of Ohio. The contaminated soil was obtained from an industrial waste site in southwest Dayton. Six grass species (Canada Wild Rye, Prairie Brome, Indian Wood Oats, Indian grass, Side-oats Grama, and Switch grass) were placed in layered soil in 5-gal buckets in a random block design, with each species subjected to three soil amendments: compost addition, fertilizer addition, and a combination of compost and fertilizer, with topsoil as a control. Maximum reductions in TPH concentrations were observed in treatments carrying Side-oats Grama in the presence of compost (8 ppm) and in treatments carrying Canada Wild Rye subjected to compost and fertilizer addition (16 ppm). Treatments carrying these species in plain topsoil had TPH concentrations of 307 ppm and 187 ppm, respectively. Both grasses exhibited extensive growth in above-ground biomass and root biomass when compost was added to the contaminated soil. Microbial growth and diversity in these treatments were investigated using both plate counts and molecular techniques. The isolated bacteria degraded common organic contaminants such as diesel fuel, phenanthrene, and the chlorinated pesticide 2,4-D. This is indicative of the presence of hydrocarbon degraders in the contaminated soil treated with Canada Wild Rye and Side-Oats Grama using compost. Molecular analysis of bacterial rDNA from these treatments, using PCR amplification of the 16S rDNA, showed banding in the V3 region, which is indicative of microbial diversity.

Project Objectives
The objectives of the research are:

1. To enumerate the bacteria isolated from the rhizosphere of two grasses, Canada Wild Rye and Side-Oats Grama, grown on soils contaminated with petroleum hydrocarbons and amended with compost.

2. To test their ability to degrade common organic contaminants such as diesel fuel, phenanthrene, and 2,4-D.

3. To investigate the diversity of these degraders using 16S rDNA genes from a hypervariable region known as V3.

Methodology
The bacteria were separated from the soil particles and organic matter. The resulting extracts were collected on black polycarbonate filters, using filtration to remove excess liquid. The soil extracts were stained with acridine orange, which binds to DNA and fluoresces when excited by blue light, causing the cells to give off a green color. Bacterial numbers were counted using fluorescence microscopy. Enrichment cultures were prepared by taking the contaminated soil from each treatment and adding minimal salts medium and a typical organic contaminant (diesel fuel, phenanthrene, or 2,4-D) to isolate organisms in the soil that could use these complex hydrocarbons as carbon sources. The bacteria obtained from this enrichment technique were tested for their ability to degrade diesel fuel, phenanthrene, and 2,4-D. These contaminants were sprayed onto the plates containing the enriched bacteria, which were then placed in an incubator for 48 hours. Upon visual examination, the plates showed obvious consumption of the individual contaminants. Bacteria grown on diesel fuel were then streaked onto two new Petri dishes containing the medium and sprayed with the other contaminants, phenanthrene and 2,4-D, respectively. Similarly, the bacteria grown on phenanthrene and 2,4-D were exposed to the other two contaminants, using standard spray-plating techniques.


The DNA from the bacteria obtained from these different treatments was extracted and amplified using the polymerase chain reaction (PCR), and the PCR products were subjected to denaturing gradient gel electrophoresis (DGGE). This procedure is based on the amplification of a region of the 16S rDNA gene known as V3 (1). Bacterial rDNA genes at this location generally differ down to the genus and species level, making it a convenient way to look at diversity (2).

Results and Discussion
The plate counts were difficult to obtain because of excessive growth, which made it difficult to distinguish individual colonies. The isolated bacteria were found to have the ability to use diesel fuel, phenanthrene, and 2,4-D as substrates. This is indicative of potential hydrocarbon degraders in the sludge.

Figure 1. Gel picture of 16S rDNA with the base-pair ladder.

Figure 1 shows the gel picture of 16S rDNA products obtained from the DNA of bacteria isolated from the different treatments. Lane 1 corresponds to Canada Wild Rye with topsoil, lane 2 is Canada Wild Rye with a combination of compost and fertilizer, lane 3 is Side-Oats Grama with topsoil, lane 4 is Side-Oats Grama with compost, lane 5 is the treatment without grass, lane 6 is the positive control for the bacteria, and lane 7 is the 100 base-pair ladder. Results show that all treatments except the one without grass demonstrated the presence of bacterial rDNA genes belonging to the V3 region, thus reflecting bacterial diversity. Further work is necessary to identify the specific oligonucleotides present and to establish the phylogenetic relationships between the different bacterial communities.

Acknowledgments
This work is partially supported by the US EPA National Center for Environmental Research STAR Grant RZ 831072. We thank Dr. Jodi Shann and Dr. Sabrina Mueller (University of Cincinnati, Department of Biology) for helpful discussions and assistance with the molecular work.

References
1. Rademaker, J. L. W., and F. J. D. Bruijn. 2000, posting date. Characterization and classification of microbes by rep-PCR genomic fingerprinting and computer-assisted pattern analysis. [Online.]
2. Versalovic, J., T. Koeuth, and J. R. Lupski. 1991. Distribution of repetitive DNA sequences in eubacteria and application to fingerprinting of bacterial genomes. Nucleic Acids Research 19: 6823-6831.


Electrodynamic Tethers for Space Propulsion

Student Researcher: Evin L. Miller

Advisor: Dr. James Bighouse

Terra Community College Arts and Sciences

Abstract
The cost of keeping the International Space Station in orbit is an especially large burden for the space program. Velocity lost to drag forces from sparse atmospheric particles causes a steady decline in the altitude of the ISS. This loss of velocity is significant enough that gravity would eventually tow the satellite back to Earth. If additional thrust, conventionally from chemical propellant, is applied tangentially to its orbital path, the satellite's altitude will increase. Chemical propellant imposes at least two large costs on the space program: the cost of the fuel needed to keep the space station in orbit, and the cost of launching that massive fuel to the ISS (which requires still more fuel). I researched one alternative form of propulsion that could reduce our dependence on conventional propellants and their associated costs by utilizing the Earth's magnetic field and ionosphere as the means for thrust. By attaching an electrodynamic tether propulsion system to the International Space Station, it can harness electromagnetic thrust. An electrodynamic tether is composed of two masses connected by a conductive material. Once in orbit, the tether system aligns itself toward the center of the Earth's mass. The ensemble is connected to a battery, which creates a potential difference and draws current from the ionosphere that opposes the tether's naturally induced emf. As the system draws electrons up the length of the tether, the current-carrying tether experiences a force from the magnetic field. This force propels the system tangentially to its orbital path, thereby raising its altitude. The orbit-raising thrust produced by electrodynamic tether propulsion should be enough to overcome the drag forces acting on the ISS. More importantly, the thrust used in this system will cost much less in the long run than chemical propellants or other concepts.

Project Objective
My objectives are to explain the forces resulting from an electrodynamic tether propulsion system and to assess the feasibility of such a system for keeping the International Space Station in orbit.

Methodology Used
As I was not at all familiar with the subject of my research, I began with the very basics. My first look at electrodynamic tethers was an overview of all of their potential uses in space. I then refined my research to textbooks, to cover all of the physics involved at an elementary level, while using an online scientific resource database to clarify technical terms. To demonstrate my comprehension of electrodynamic tether propulsion, I drew a model to summarize the significant forces involved in the concept. I concluded my research by reading several NASA documents, especially from the Propulsive Small Expendable Deployer System mission, which detailed more specifics of the design and feasibility of the system.

Results Obtained
The design of the electrodynamic tether system works under principles from several areas of physics to provide the means for thrust in low Earth orbit. A 10-kilometer propulsive EDT's movement through the Earth's magnetic field and ionosphere produces 0.5 to 0.8 newtons of force, which is more than enough to overcome the drag forces that the International Space Station experiences every day. The cumulative thrust produced is competitive with conventional propellants, but the EDT exhibits its major advantage through the comparatively tiny amount of energy and mass needed to power its means of thrust.
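As a rough, hedged cross-check of the quoted thrust range (not a calculation from the sources), the force on a straight tether carrying current I perpendicular to a magnetic field B is F = BIL, so the current implied by 0.5-0.8 N over a 10 km tether in an assumed low-Earth-orbit field of about 3 × 10⁻⁵ T is:

# Rough order-of-magnitude check; B is an assumed value, not data from the paper.
B = 3.0e-5          # assumed geomagnetic field strength in LEO, tesla
L = 10.0e3          # tether length, m (10 km, as quoted)

for F in (0.5, 0.8):
    I = F / (B * L)                 # amperes, from F = B*I*L
    print(f"F = {F} N  ->  I ~ {I:.1f} A")
# roughly 1.7-2.7 A, a plausible tether current under these assumptions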
A solar array could generate power for the EDT's battery, which would cut the space station's dependence on expensive chemical fuels. NASA stands to save up to 2 billion dollars over a 10-year period, mainly from avoided fuel transportation costs. There are other potential benefits of EDT space propulsion: avoiding the use of propellants that contaminate the space environment, providing a more flexible research environment, and refocusing International Space Station resupply flights on maintenance instead of refueling.


The use of EDT propulsion gives the ISS an autonomous energy and thrust source, so that the space program can focus on using the International Space Station instead of on keeping it in orbit.

Significance and Interpretation of Results
Any organization that adopts electrodynamic tether propulsion technology relieves itself of the costs of periodic refueling. The American public sector benefits because NASA will have less of its budget tied up in refueling missions and more available for research and results. Once this technology diffuses into the economy, the private sector will benefit from expense reductions on commercial satellites. These results also open the door to new EDT research on power generation in orbit around any planet with a magnetic field and an ionosphere: probes launched with EDT technology could generate power from sources both near and far to run each probe's mission tools. We could see all of this technology in operation once issues such as how to ensure tether integrity and how to power a propulsive EDT's potential difference during unlit hours are resolved. Electrodynamic tether propulsion is truly a significant advancement in the scientific endeavor because it enables us to make better use of available resources.

References
1. Lorenzini, Enrico and Sanmartín, Juan. "Electrodynamic Tethers in Space." Scientific American, Aug. 2004: 52-57.
2. Giancoli, Douglas C. Physics: Principles with Applications, Fifth Edition. Ed. Paul F. Corey, et al. New Jersey: Prentice Hall, 1997.
3. United States. National Aeronautics and Space Administration. International Space Station: Electrodynamic Tether Reboost Study. Alabama: Marshall Space Flight Center, 1998. 1 Apr. 2006 <http://trs.nis.nasa.gov/archive/00000439/01/tm208538.pdf>.


Control of High Speed Cavity Flow Using Plasma Actuators

Student Researcher: Douglas A. Mitchell

Advisor: Professor Mohammad (Mo) Samimy

The Ohio State University Department of Mechanical Engineering

Abstract
At the Gas Dynamics and Turbulence Laboratory (GDTL), located next to Ohio State's Don Scott Airport, plasma actuators are being developed that are capable of producing high-amplitude and high-frequency actuation. These actuators are placed along the leading cavity edge and are capable of influencing the separating shear layer. A cavity model was designed and fabricated to attach to a converging rectangular nozzle operating in a free jet facility at GDTL. Plasma actuators are installed on the leading edge of the cavity to create pressure perturbations. Kulite pressure transducers are embedded in the floor of the cavity to detect the pressure fluctuations produced within the cavity by the flow. The cavity nozzle extension was tested from Mach 0.50 to 0.75. At higher subsonic velocities, the combination of the separating shear layer and the cavity geometry produces a choke point in the flow downstream of the cavity. This research project will be used to determine the feasibility of further use of plasma actuators in controlling high-speed flow over open cavities and their ability to attenuate the amplitude of the pressure fluctuations.

Project Objectives
One of the new technologies being integrated into advanced fighter aircraft is stealth. In older fighters, weapons were exposed under the wings or housed externally, but in new stealth aircraft this is not an option. Therefore, the fighters are being designed with internal weapons bays that will only be opened during flight. Unfortunately, when the bay door is opened, air over the aircraft separates from the leading edge of the cavity, creating shear layers that interact with the trailing edge of the cavity and produce strong pressure fluctuations, sometimes with pure tones known as cavity tones. These fluctuations are generated by the coupling of the shear layer and the cavity acoustics, which is constructive and self-sustaining in nature (Samimy et al. 2004). Closed-loop active control has been used to effectively reduce pressure spikes within a cavity in low subsonic flows. Synthetic jet, or compression driver, actuators have been used for this purpose but are incapable of producing the high-amplitude and high-frequency actuation necessary for control of high subsonic, transonic, and supersonic flows (Debiasi and Samimy 2004). This research project involves the design and construction of a cavity model as a nozzle extension. The pressure perturbations are measured using two Kulite pressure transducers located at the center and offset positions of the cavity floor. These data will then be used to determine the baseline case for the cavity facility. Once the baseline velocity is established, plasma will be generated to determine its effect on the separating shear layer and its ability to reduce the amplitudes (sound pressure level, dB) within the cavity.

Methodology
In order to study the pressure perturbations within the flow, a cavity nozzle extension was designed that attaches to a 0.50" high by 1.50" wide rectangular nozzle. The extension employs a clamshell-type design that enables it to fit tightly around the nozzle. Downstream of the nozzle is the plasma actuator housing insert. The insert is made of a machinable boron nitride ceramic, which is capable of resisting high temperatures without degrading or eroding from the plasma discharge. The 1.0 mm diameter steel electrodes used to generate the plasma are placed inside a groove 0.5 mm deep, 2.0 mm from the leading edge of the cavity.
This is done to prevent instability in the plasma and to keep it from being blown downstream before it can fully develop. The leading edge of the cavity is formed by the ceramic insert. The cavity is 1.0" long and 0.27" deep. Two Kulite pressure transducers are embedded in the floor of the cavity to acquire the pressure fluctuations within the cavity.


Static pressure taps are located on the roof of the cavity nozzle extension from 0.25" upstream of the cavity to 1.50" past the cavity, at 0.25" spacing; these are used to acquire the Mach number before, over, and past the cavity. The interior of the cavity nozzle extension can be seen in Figure 1. Now that the dimensions of the cavity nozzle extension are known, the frequencies of the pressure fluctuations can be predicted. There are two prominent sources that generate tones in a flow. The first is acoustic modes, which are shown in non-dimensional form as a Strouhal number in Equation (1):

\[ St_n = \frac{f_n L}{U_\infty} = \frac{L}{h}\left(\frac{n}{2 M_\infty}\right) \qquad (1) \]

where f is frequency, L is the length of the cavity, h is the characteristic length between nodes, U∞ is the freestream velocity, n is the integer mode number, and M∞ is the freestream Mach number. These frequencies can be generated in the longitudinal direction over the cavity, the transverse direction orthogonal to the floor of the cavity, and the lateral (spanwise) direction across the cavity. The second source is the turbulence structures in the flow. The Rossiter modes are given by a semi-empirical formula developed in the 1950s to predict the resonant frequencies of flow over cavities (Rossiter 1964). This formula is shown in its non-dimensional form in Equation (2):

\[ St_n = \frac{f_n L}{U_\infty} = \frac{n - \varepsilon}{\dfrac{M_\infty}{\sqrt{1 + \dfrac{\gamma - 1}{2} M_\infty^{2}}} + \dfrac{1}{\beta}} \qquad (2) \]

where f is frequency, L is the length of the cavity, U∞ and M∞ are the freestream velocity and Mach number, n is the integer mode number of structures spanning a cavity length, ε is the phase lag, γ is the ratio of specific heats, and β is the ratio of the convective speed of the large-scale structures to the freestream velocity. The predicted frequencies as a function of Mach number are shown in Figure 2. It is important to note that the intersection of the Rossiter and acoustic modes provides the best prediction of where the dominant generated frequency will occur; however, there is no exact method to predict which frequency will be excited.

Results Obtained
During experimentation, the stagnation pressure, Po, was adjusted incrementally and the static pressure was measured at the leading edge of the cavity in order to determine the Mach number of the flow as it passed over the cavity. The isentropic relation between stagnation pressure and static pressure was used to calculate the Mach number. As the stagnation pressure was increased, the Mach number increased across the cavity. The Mach number in relation to the position of the cavity is shown in Figure 3. It is important to note that as the Mach number was increased over the cavity, the Mach number increased even more downstream of the cavity. Mach 0.75 was the limit of the flow over the cavity: as Figure 3 shows, any increase in the flow velocity over the cavity caused the flow to choke downstream of the cavity, and once the flow is choked its velocity cannot increase further. It is hypothesized that the separation of the shear layer from the cavity edge and its growth over the cavity create a decrease in area between the top of the extension and the top of the shear layer. Since the area is reduced, the flow accelerates until it reaches Mach 1.0, or sonic conditions, the limit for flow without a diverging section. The flow velocity was then swept from a stagnation pressure of 2.0 psig (Mach 0.48) to 12.0 psig (Mach 0.75) to record the frequency spectra of the flow at various Mach numbers. A Kulite pressure transducer and LabView were used to record the pressure fluctuations. The signal was then converted into a sound pressure level (SPL, dB) using a reference pressure of 20 µPa. A spectrogram showing the frequency content and amplitude for the various flow regimes is shown in Figure 4. A sound pressure level of 150 dB is known to be physically harmful to humans and is regarded as the maximum level that the electronics in a weapon can sustain before a malfunction could occur (Steinburg 1988).
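A minimal sketch of how curves like those in Figure 2 could be generated from equations (1) and (2), together with the isentropic Mach-number relation used in the results, is given below. The Rossiter constants ε and β and the choice of h as the cavity depth are assumptions, not values taken from this paper.

import numpy as np

gamma    = 1.4
epsilon  = 0.25            # assumed Rossiter phase lag (not given in this paper)
beta     = 0.57            # assumed convective-speed ratio (not given in this paper)
L_over_h = 1.0 / 0.27      # assuming h is the cavity depth (L = 1.0", D = 0.27")

def acoustic_strouhal(n, mach):
    # Equation (1): St_n = (L/h) * n / (2 * M_inf)
    return L_over_h * n / (2.0 * mach)

def rossiter_strouhal(n, mach):
    # Equation (2): modified Rossiter formula
    return (n - epsilon) / (mach / np.sqrt(1.0 + 0.5*(gamma - 1.0)*mach**2) + 1.0/beta)

def isentropic_mach(p0_over_p):
    # Mach number from the measured stagnation-to-static pressure ratio
    return np.sqrt((p0_over_p**((gamma - 1.0)/gamma) - 1.0) * 2.0/(gamma - 1.0))

mach = np.linspace(0.4, 0.8, 5)
for n in (1, 2):
    print(f"mode {n}: acoustic St = {np.round(acoustic_strouhal(n, mach), 2)}, "
          f"Rossiter St = {np.round(rossiter_strouhal(n, mach), 2)}")
print(f"Po/P = 1.45  ->  M = {isentropic_mach(1.45):.2f}")   # about Mach 0.75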


At this point in testing, plasma actuation has not been conducted to determine its effect on the pressure fluctuations in the cavity. These tests will occur in the next few weeks.

Acknowledgments
The author of this paper would like to thank Professor Mo Samimy for the opportunity to conduct research at the GDTL, as well as for all the assistance he provided during this project. He would also like to thank Marc Blohm, Edgar Caraballo, Dr. Marco Debiasi, Dr. Jacob George, Jeff Kastner, and Jesse Little for all of their helpful discussions, technical knowledge, and assistance in conducting experiments.

Figures

Figure 1. Model of Cavity Nozzle Extension Showing Interior and Attachment to Converging Nozzle.

Figure 2. Prediction of possible acoustic and Rossiter modes for a cavity of one inch in length as a function of Mach number.


Figure 3. Mach number through the cavity nozzle extension as the flow travels over and past the cavity. Cavity length is 1" for this experiment.

Figure 4. Subsonic spectrogram of cavity flow from Mach 0.48 to 0.75 (amplitude shown as SPL, dB).

References
1. Debiasi, M. and Samimy, M. "Logic-Based Active Control of Subsonic Cavity Flow Resonance." AIAA Journal 42.9 (2004): 1901-1909.
2. Rossiter, J. E. "Wind Tunnel Experiments on the Flow over Rectangular Cavities at Subsonic and Transonic Speeds." RAE TR 64037 and Aeronautical Research Council, Repts. and Memoranda No. 3438 (1964).
3. Samimy, M., Debiasi, M., Caraballo, E., Malone, J., Little, J., Özbay, H., Efe, P., Yan, P., Yuan, X., and DeBonis, J. "Exploring Strategies For Closed-Loop Cavity Flow Control." AIAA Paper 2004-0576 (2004): 1-16.
4. Steinburg, D. S. Vibration Analysis for Electronic Equipment. 2nd Ed. John Wiley & Sons, Inc. (1988).


Assurance Technology Center (ATC)

Student Researcher: Derrick Moore, Jr.

Advisor: Ellen Blahut

Case Western Reserve University Computer Engineering

Abstract
The mission of the Assurance Technology Center (ATC) is to assist the Office of Safety and Mission Assurance (SMA) in the management of Agency SMA activities in four areas: Education and Career Management; Mishap Investigation; Knowledge Management; and SMA Research/Development. The vision of the ATC is to weave together the fabric of NASA Safety and Mission Assurance expertise and guide the growth of knowledge, thereby ensuring its continuance to future practitioners. The education and career management area was the source of the majority of my work this summer. There is an extensive amount of educational resources available to SMA professionals throughout NASA, and the ATC strives to centralize this information. My work this summer was on the Training Research Database of the ATC website. This database is a collection of Safety & Mission Assurance training resources currently utilized at the field centers, from sources both internal and external to NASA. The tool is intended to be a centralized resource for assisting SMA practitioners and managers with finding technical training, resulting in increased SMA technical expertise. My assignment was to check and make necessary changes to the training resource database. This included, but was not limited to, developing and executing a checklist for the various educational resources in the training resource database and making sure each link was functioning properly. It also included making sure that each resource that required a course description and contact information had one. As part of my summer internship, I also worked with a second mentor, Chris Bunk, with whom I did significant work on the Video Nuggets Library of the Process Based Mission Assurance website. The Video Nugget Library provides an opportunity for others to watch and listen to NASA employees as they share their expertise in a large variety of areas. My assignment in this area was to assess the current status of the archived video nuggets and to reorganize them along with the video nuggets on the PBMA website. I developed a new filing and naming system for the video nuggets, with which I re-filed the video nuggets and their transcripts and prepared them to be archived to DVD.


Acknowledgments
I would like to acknowledge and thank the following:

Ellen Blahut, Maria Havenhill, Chris Bunk, Suzanne Otero, Frank Robinson, Steve Lily, Tiffany Kennedy, Jennifer Jones, Susan Gott, Katherine Shupp, Laura Stacko, Darla Kimbro, Akua Soadwa, Tenzi Szabo, and the NASA Community.


Structural Analysis of HALE Aircraft Wing Design

Student Researcher: Heather N. Mulcahey

Advisor: Dr. Urmila Ghia

University of Cincinnati Department of Mechanical Engineering

Abstract
The design of HALE (high-altitude long-endurance) aircraft wings is more complex than that of ordinary aircraft. The aircraft performs reconnaissance missions, which require placement of sensor equipment in the aircraft wings. My initial focus was to determine where this equipment can best be placed for structural stability of the wings, while also providing a broad range of surveillance from the aircraft. Using the finite element analysis software ANSYS, I have tested various mass distributions and fluid loads on a simplified model of the HALE aircraft wings. My results show that the sensory equipment is best distributed across the central area of the aircraft, as opposed to being placed near the wing tips (see Figure 2). Also, by placing the equipment on the inner portion of the wings, the equipment can be distributed over a larger area while still minimizing the deformation of the aircraft. I have continued my work by performing modal (vibration) analyses on the model in ANSYS to determine various mode shapes for the structure. Since the wings of the HALE aircraft are much longer than those of an ordinary plane, they are more sensitive to vibrations. The deformed mode shapes, and the frequencies at which they occur, are important in the design of this unmanned vehicle. Also, due to the large wing span of the HALE aircraft, the structure can be prone to buckling under the fluid load. Thus, a linear (eigenvalue) buckling analysis was performed on the structure in ANSYS, which yielded a critical load factor of λ = 3.868. All of the data acquired from this research will be used in future design of the joined-wing structure.

Project Objectives
The ultimate goal of this work is to find the best wing design for a HALE aircraft, based on structural analyses. The sensor equipment necessary for these tasks is located within the HALE aircraft wings. As a result, the wing sections are thick and cannot have the high camber required for large lift. Currently, HALE aircraft are capable of remaining in the air for a period of about twenty-four hours; the goal of my research is to examine a vehicle capable of a flight duration of six to eight days at a time. These factors, and the low-density environment at high altitudes, necessitate that the wing have a long span to provide sufficient lift. Thus, the idea behind the design of a HALE aircraft lies in the wing structure: the wing must be long enough (and light enough) to generate the necessary lift. However, a longer wing experiences more deflection, vibration, and stress. Recently, the Air Force Research Laboratory (AFRL) has proposed a model showing that this deflection is lessened by the addition of an auxiliary wing to the aircraft. I am focusing on the structural aspects of the reinforced joined wing (Figure 1). I am doing this work with the FEA (finite element analysis) software ANSYS. This software enables creation of a finite-element representation of the wing structure, so that I can perform the appropriate structural analyses on the model. My work in ANSYS has provided a good idea of where (and when) the maximum deflections occur. This has helped in optimizing the design of the model by allowing me to focus on those locations that have greater deformation and stress.

Methodology
One of the main concerns during the design of this aircraft is the distribution of the sensor equipment. Since we are working with an FEA model, we do not need to model the sensor equipment geometry in extreme detail. Rather, since we know the mass of the equipment to be 150 slugs, we can focus on the best weight distribution across the wings. In ANSYS, there is an option of modeling only mass (without any corresponding geometry), with a MASS21 element.


I created an input file to apply these elements across the model (along with gravitational acceleration), and did so in many different arrangements. I tested distributions along both the inside and the outside edge of the wings. Once I found the few arrangements that created the smallest wing deformation, I chose the option that would provide an even enough distribution for good surveillance. At this point, the optimized mass distribution was combined with the equivalent fluid load, and new deformation results were obtained for comparison.

In addition to the effects of the weight of the sensor equipment, a good amount of physical vibration will occur while the aircraft is in the air. Thus, I have performed modal analyses in ANSYS to determine the structure's natural frequencies and mode shapes. The aircraft body is symmetric, so its geometry is modeled for only half of the aircraft, and boundary conditions are applied to simulate the full model. No other loads were applied to the model, so as to find the free natural frequencies of the body. The mode shapes for each mode were documented, as well as the maximum deformation. Also, due to the long wing span of the HALE aircraft, the structure risks buckling under the fluid load while in the air. A linear (eigenvalue) buckling analysis was performed on the ANSYS model to evaluate the deformed shape and first critical load value under a pressure load equivalent to that applied by the fluid. In addition to the pressure load, the symmetrical boundary conditions at the roots were once again applied to model symmetry. Under the actual pressure load values, the ANSYS analysis was able to calculate a critical load factor, λ.

Results
From the mass distribution analysis, I found the sensor equipment to be best placed across the inside of the wings (see rough distribution in Figure 2). This gave a maximum deformation of 6.97 feet (with zero fluid load applied), which we found to be reasonable, while also giving a fairly balanced distribution of the sensor equipment. We also looked at other distributions that gave smaller deformations, but the sensor equipment would have been too localized around the center of the aircraft and would thus provide poor surveillance. When the fluid load was added back into the analysis with the optimized mass distribution, new results were obtained to see how the two load cases related. The corresponding pattern of deformation along the main aircraft wing, from root to tip, can be seen in Figure 3. The modal analysis gave the lowest natural frequency as 0.508 Hz. The second mode produced the highest deformation (0.19682 ft), with the second mode shape displayed in Figure 4. This information will be used in further analyses so as to avoid high levels of vibration that may cause the aircraft to fail. For a structure to avoid buckling at a certain load, it must have a first buckling eigenvalue greater than 1. The linear buckling analysis in ANSYS yielded a first buckling eigenvalue of λ = 3.868, confirming that the joined-wing structure will not buckle under the fluid load alone. The corresponding deformed shape for this linear buckling analysis can be seen in Figure 5.

Future work on this project will involve dynamic analysis of the structure via finite element techniques. Currently, separate work has been done in both the fluid and structural fields. Dynamic analysis will combine these two fields into a fluid-structure interaction model. This analysis is more complex, but should provide more accurate results for the structure as a whole.
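The ANSYS wing model itself is not reproduced here, but the modal and linear-buckling analyses described above both reduce to generalized eigenproblems. The toy two-degree-of-freedom sketch below (all matrices are invented for illustration) shows how natural frequencies, mode shapes, and a critical load factor are extracted from stiffness, mass, and geometric-stiffness matrices; it is not the wing model and uses none of its data.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-DOF stand-in (NOT the ANSYS wing model): illustrates the generalized
# eigenproblems behind the modal and linear-buckling analyses described above.
K = np.array([[ 6.0, -2.0],          # stiffness matrix (arbitrary illustrative units)
              [-2.0,  4.0]])
M = np.array([[ 2.0,  0.0],          # mass matrix
              [ 0.0,  1.0]])

# Modal analysis: K phi = omega^2 M phi  ->  natural frequencies and mode shapes
omega_sq, mode_shapes = eigh(K, M)
freqs_hz = np.sqrt(omega_sq) / (2.0 * np.pi)

# Linear (eigenvalue) buckling: K phi = lambda * (-Kg) phi, where Kg is the
# geometric stiffness from the reference (fluid) load; the smallest positive
# lambda is the critical load factor (ANSYS reported lambda = 3.868 here).
Kg = np.array([[-1.0,  0.3],
               [ 0.3, -0.8]])
lam, buckling_shapes = eigh(K, -Kg)
lambda_cr = lam[lam > 0].min()
```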
Acknowledgments
The author would like to thank Dr. Urmila Ghia for acting as her advisor during this process, as well as for offering her expertise in the research field. The author would also like to thank Miss Valentina Kaloyanova and Mr. Prabu Ganesh Ravindren, the graduate students leading this research, for their guidance and cooperation on the project.


Figures

Figure 1. Joined-wing Configuration for HALE Aircraft.

Figure 2. ANSYS Symmetrical Model Used for Analyses.

The total mass of 150 slugs was applied to the ANSYS model as depicted in Figure 2, with a different portion of the total mass applied evenly across each section. The optimized distribution, including the fluid load, can be seen in Table 1.

Table 1. Results Comparison when Fluid and Mass Loads are Applied.


Figure 3. Deformation (ft) vs. Main Wing Length (ft), Root to Tip

Figure 4. 2nd Mode Shape of Structure, Frequency = 2.0087 Hz.

Figure 5. Buckled Shape (Linear Buckling Analysis).

References
1. ANSYS Release 9.0 Documentation, 2004.
2. Kaloyanova, Valentina B. “Structural Modeling and Optimization of the Joined Wing of a High-Altitude Long-Endurance (HALE) Aircraft.” 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, January 2005.


A Neural Network Based State of Charge Predictor for Lithium Ion Battery Cells

Student Researcher: Mike Orra

Advisor: Thomas A. Stuart, PhD

The University of Toledo Electrical Engineering and Computer Science

Abstract
Large lithium ion battery packs are being proposed as viable energy storage systems for many high-power applications, including hybrid-electric vehicles (HEVs), space vehicles and satellites. In order to ensure optimal as well as safe operation of the battery pack, it is necessary to implement a more sophisticated battery management system (BMS) than is typically found in consumer electronics. Such BMSs are responsible for diagnostic monitoring and data collection of battery cell conditions, equalizing cells in the battery pack, and determining the battery pack's state of charge to, in turn, drive the charge control algorithm. The ability to accurately assess the state of charge of a battery pack is one of the most critical tasks a BMS performs. State of charge (SOC) is simply a measure of how much capacity remains on a battery versus its rated capacity.

Presently, the most practical and effective method for determining battery SOC is coulomb counting: a time-based integration of the amount of current flowing into and out of the battery pack, i.e., the net charge. By tracking the amount of charge flowing into and out of the battery pack, the BMS is able to update the SOC value accordingly (assuming the initial SOC value is accurate). Because coulomb counting errors are cumulative, the SOC must be periodically reset to an accurate value. Furthermore, there are additional losses that take place in the battery's internal chemistry that coulomb counting does not account for, so additional SOC estimation errors may result.

Lithium ion cells have linear SOC to stabilized open circuit voltage (SOCV) characteristic curves, and lithium cells reach their SOCV very rapidly after being “open circuited.” This means that SOC can be accurately determined if a cell's SOCV can be measured. In order to take advantage of the SOC vs. SOCV characteristic curve, a relationship between the battery cell's operating circumstances and its corresponding open circuit voltage must be developed. Because there is no closed-form model that allows us to develop such a relationship, an empirical approach must be taken. Neural networks constitute one such approach: they are computing models that can be trained to map highly nonlinear functions, recognize and classify patterns, and perform time-series forecasting. This research proposes their use in predicting SOC for lithium ion battery cells based on a given cycling pattern. The training data set will be explicitly comprised of the ambient temperature, instantaneous current flow, open circuit cell voltages and closed circuit cell voltages. Additional parameters such as cell age and cycling history are inherently manifested in the training data.

Successful training of neural networks for forecasting SOC will reduce BMS complexity and cost by eliminating coulomb counting circuitry. Moreover, BMSs using the proposed method will be better suited to use inexpensive dissipative equalizers to balance cells, further minimizing system cost. The effects of the dissipative equalizer will be inherently manifested in the training data, and thus accounted for by the neural network. Finally, by accounting for those parameters affecting SOC that are often disregarded in present BMSs (cell age, number of cycles, past operating conditions, etc.), neural networks should be better able to estimate SOC.
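As a point of reference for the coulomb-counting baseline described in the abstract, a minimal SOC update step might look like the following sketch. The function name, the sign convention, and the coulombic-efficiency placeholder are illustrative assumptions, not part of the BMS hardware described in this report.

```python
def update_soc(soc_prev, current_a, dt_s, rated_capacity_ah, coulombic_eff=1.0):
    """One coulomb-counting step: integrate pack current into state of charge.

    Sign convention (an assumption here): current_a > 0 means discharge.
    coulombic_eff is a placeholder for the internal losses that plain coulomb
    counting does not capture.
    """
    delta_ah = current_a * dt_s / 3600.0
    return soc_prev - coulombic_eff * delta_ah / rated_capacity_ah

# Example: a 10 Ah pack discharged at 2 A for 30 s loses about 0.17% of its charge.
soc = update_soc(0.80, current_a=2.0, dt_s=30.0, rated_capacity_ah=10.0)  # ~0.798
```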
Project Objectives
Determination of a battery cell's state of charge (SOC) is one of the most important functions that a battery management system (BMS) performs, especially when lithium ion cells and high power (a potential safety hazard) are involved. Accurate assessment of SOC helps BMSs maintain safe and optimal operation of battery packs. Many present-day BMSs utilize coulomb counting as their means of determining SOC.


Unfortunately, there are internal losses in batteries that coulomb counting does not account for, resulting in SOC estimation errors. In addition, current BMSs often disregard parameters affecting battery behavior, such as cycling history and cell age. While some complex software algorithms have been developed to compensate for these problems, the fact remains that there is no closed-form model that can accurately indicate how these various factors affect cell behavior. Hence, an empirical approach needs to be taken. The objective of this project was to examine the feasibility of using neural networks as a practical solution to SOC prediction by leveraging the linear relationship that exists between a lithium ion battery's SOC and its stabilized open circuit voltage (SOCV). Data sets used in the development of an empirical model of lithium battery behavior were generated by taking measurements of closed circuit cell voltages and their corresponding open circuit voltages. These parameters were captured under varying ambient temperatures and current flows, which were also recorded. This information was used to train a multitude of feedforward neural networks (configured with various architectural and computational attributes) to output the SOCV given the closed circuit cell voltages, ambient temperature and instantaneous current flow.

Another important attribute of a battery management system is its ability not only to effectively monitor the voltages of all battery cells in the pack, but also to equalize the charge among them. A high-voltage cell limits the maximum charge the pack can deliver, while a low-voltage cell limits the minimum pack discharge. Therefore, the maximum pack capacity is achieved if all cell voltages in a battery string are balanced. Maintaining equal voltages results not only in improved performance (in terms of the battery pack's ability to supply the power demanded by a given load), but also in extended battery life. This study also examined the use of inexpensive dissipative equalizers in equalizing charge among lithium ion battery strings.
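Once a stabilized open circuit voltage is available, applying the linear SOC vs. SOCV characteristic is a simple lookup. The sketch below shows the idea; the endpoint values of the curve are placeholders, since the measured characteristic from this study is not reproduced in this report.

```python
import numpy as np

# Placeholder SOC vs. SOCV characteristic for a nominal 4 V lithium ion cell;
# only the linear form of the relationship is taken from the text.
SOCV_POINTS = np.array([3.4, 4.1])   # stabilized open circuit voltage (V)
SOC_POINTS = np.array([0.0, 1.0])    # corresponding state of charge

def soc_from_socv(socv_volts):
    """Map a stabilized open circuit voltage onto state of charge by interpolation."""
    return float(np.interp(socv_volts, SOCV_POINTS, SOC_POINTS))

soc = soc_from_socv(3.75)            # 0.5 for this placeholder curve
```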

Methodology Used
Data Acquisition: While parameters such as current and temperature could be measured using standard commercially available components, individual cell voltage measurements had to be preprocessed by a voltage transfer circuit so that each measurement could be taken with respect to a common ground reference. It has been shown that using an op-amp based transfer circuit is more accurate than other designs using discrete components, as there are no component matching issues [1]. Furthermore, an op-amp based circuit significantly reduces the part count and complexity of the transfer circuit. The circuit shown in Figure 2 represents the voltage transfer circuit to be used in our design [1]. The method of operation for the transfer circuit will be described for taking the measurement of VBn-3. The voltage appearing at the positive input terminal of op-amp an-3 is equal to VBn-4. The voltage appearing at the negative input terminal of op-amp an-3 is equal to i1R1. The op-amp seeks equilibrium between the voltages at its input terminals. If the voltage at its negative input terminal is greater than that at the positive input terminal, the output voltage Vo will be driven to the negative supply rail (approximately 0 V). This turns the p-channel MOSFET on, and current i1 flows. Vo will continually adjust until equilibrium is achieved at the op-amp's input terminals [1]. That is,

VBn-4 = i1R1    (1)

Thus, our voltage measurement VMn-3 will be given by the following equation as a function of VBn-3:

VMn-3 = i1R2 = (R2/R1) · VBn-3    (2)

In this design, MXL1179 low-power precision op-amps from Maxim were used. They have a voltage offset drift of 5.5 µV/°C and can produce reasonably accurate measurements over a wide temperature range. For example, a 40 °C temperature range (typical of the range experienced by automobiles) will result in an offset drift of only:

∆V = ±5.5 µV/°C × 40 °C = ±0.220 mV

For 4 V cells, this results in a measuring error of ±0.0055%. This circuit was powered from the battery pack. The use of these low-power op-amps helps keep power consumption to a minimum (0.54 mW per op-amp) while the transfer circuit is active; however, when the


circuit is inactive, it will be necessary to isolate the circuit from the battery pack. Failure to do so could, over an extended period of time, result in the batteries being drained by the transfer circuit.
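The transfer relation and the quoted offset-drift error can be checked with a few lines. This is a back-of-envelope verification based on the equations above as reconstructed, not the data-acquisition code used in the study.

```python
def measured_voltage(v_cell, r1, r2):
    """Transfer-circuit output: the op-amp drives i1 so that i1*R1 equals the
    cell-side voltage, and the measurement taken across R2 is then (R2/R1)
    times that voltage, per Equations (1) and (2) as reconstructed above."""
    return (r2 / r1) * v_cell

# Offset-drift error quoted in the text: 5.5 uV/C over a 40 C swing, on a 4 V cell.
drift_v = 5.5e-6 * 40.0              # 0.220 mV
error_pct = 100.0 * drift_v / 4.0    # ~0.0055 %
```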

Cell Equalization: There are two methods of equalization that can be employed: dissipative and redistributive. A dissipative equalizer circuit is one that bleeds excess charge off of battery cells. The charge is lost as heat when current flows through a resistor. This process continues in the battery string until all batteries have had their voltages reduced to the level of the lowest voltage cell in the string. A redistributive equalizer circuit is one that, as its name suggests, redistributes charge from the higher voltage cells to the lower voltage cells until balance is achieved.

The equalizer circuit examined in this study is shown in Figure 1. The circuit design was initially developed by Maxwell Technologies, Inc. for the purpose of balancing ultracapacitors [2]; however, the design can be modified to balance battery cells. The op-amp in this circuit is configured to operate as a voltage follower. Therefore, the op-amp will seek to make its output voltage equal to the voltage appearing at its positive input terminal. The mid-point voltage between batteries BT1 and BT2 is fed into the negative input terminal via a feedback resistor. The positive input terminal is connected to the midpoint of a resistor divider, equal to half of the sum of voltages VBT1 and VBT2. If the voltage on battery BT2 is higher than that of battery BT1, the op-amp's output will drive transistor Q1, establishing a path for current to flow from the positive terminal of battery BT2 through the op-amp's positive supply input, the base of Q1 and the power resistor R6. This effectively bleeds the charge off of BT2 until VBT2 is precisely equal to VBT1. Similarly, if battery BT1 is at a higher voltage than battery BT2, VBT1 will be bled off until it is equal to VBT2.

The battery pack used in this study was comprised of 4 V cells. Thus, once the cells are balanced, this circuit will consume less than 55 µA of current (40 µA for the resistor divider and 15 µA for the op-amp). Each pair of cells in a battery string requires one of these equalizer circuits to maintain balance between them. Therefore, a battery string of 8 cells would require 7 circuits, each consuming 55 µA. Again, for 4 V cells this results in a total power loss of 3.08 mW.
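The quiescent drain quoted above for a balanced eight-cell string can be verified directly. The only assumption in this sketch is that each equalizer sits across a pair of nominally 4 V cells, as described.

```python
N_CELLS = 8
N_EQUALIZERS = N_CELLS - 1      # one equalizer circuit per adjacent pair of cells
QUIESCENT_A = 55e-6             # 40 uA resistor divider + 15 uA op-amp, balanced state
PAIR_VOLTAGE = 2 * 4.0          # each circuit sits across two nominally 4 V cells

per_circuit_w = QUIESCENT_A * PAIR_VOLTAGE    # ~0.44 mW per circuit
total_loss_w = N_EQUALIZERS * per_circuit_w   # ~3.08 mW total, as quoted above
```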

Neural Network Development: In order for a network to learn, a sequence of input data and their corresponding outputs is required. The network produces an output based on the input data presented to it. The network’s output is then compared to the known corresponding output that we would like the system to generate. The network’s weights and biases are then adjusted accordingly to better calculate the desired response. In this study, training data sets were generated by taking measurements of the instantaneous current flow, ambient temperature, and open and closed circuit cell voltages while cycling the battery pack under varying ambient temperatures [10ºC - 40 ºC] and current flows [1A - 3A]. The duty cycle of the loading pattern for the battery was arbitrarily chosen to be 50% with a period of 30 seconds. A training algorithm is typically used to repeat this procedure, continuously moving the network’s output closer to the desired output response.

In this study, the error back propagation training algorithm (EBPTA) was used. EBPTA is gradient-descent based: the gradient of the error function is computed with respect to the network's weights and biases, and its negative is taken as the direction that will result in reduced output error. Conceptually speaking, it is analogous to a hiker who has been stranded on top of a mountain under foggy conditions and needs to reach the bottom [3]. Not having any other means of guidance, the hiker can examine the immediate area around him and, naturally, assume that the fastest way to the bottom is to see where the mountain's slope is steepest (the gradient of his position) and proceed in the downward (negative) direction [3]. Using this approach, the network will continue to update its weights and biases until it has reached a predefined acceptable level of error or the error function is at a minimum; that is to say, until further adjustment of the weights and biases would result in increased error. One obvious drawback of this approach is that training may conclude prematurely as a result of local minima that may exist in the error function. Certain techniques, however, such as including a “momentum” term in the weight and bias adjustments, can help minimize the risk of premature training termination due to local minima.

Numerous neural network architectures have been developed, some of which are used to perform specific tasks such as pattern recognition, forecasting, classification, function approximation, etc.


Because neural network development is an “imprecise” science, it is often difficult to determine which network architecture and training algorithm are optimal for a given application. However, the feedforward multilayer perceptron architecture is often used for modeling complex, nonlinear functions [4]. A brief literature survey of recent research involving artificial neural networks (ANNs) supports the above statements [5-8]. As a result, this study used a feedforward multilayer perceptron. There are ten inputs to the network: the closed circuit cell voltages (CCVs) of all eight cells in the battery pack, the ambient temperature and the instantaneous current flow. The network outputs the corresponding SOCV of each cell in the battery pack. The network topology is shown in Figure 5. The number of hidden layer neurons is variable, allowing for empirical determination of the optimal number (if any). While there is no precise science behind selection of the optimal number of hidden neurons, Kolmogorov's Theorem suggests limiting the number of hidden layer neurons to 2n+1, where n is the number of network inputs [5]. In this study, the number of hidden layer neurons to be considered was set to the range [1, 20]; note that all network configurations observing the suggested 2n+1 hidden neuron limit are included in the given range.

Another important consideration of neural network development is which transfer function (also referred to as the activation function), or combination of transfer functions, should be used to model the network's neurons. All neurons in a given layer typically have the same transfer function. While any differentiable function may be used, the saturated linear (satlin) and unipolar sigmoid functions are commonly used and thus will be considered in this study. The satlin transfer function's output is bounded between 0 and 1 according to the following piecewise-linear function [9]:

s(net) = 0 if net ≤ 0;  s(net) = net if 0 ≤ net ≤ 1;  s(net) = 1 if net ≥ 1

Its plot is shown in Figure 3. The unipolar sigmoid function is defined by:

u(λ, net) = 1 / (1 + e^(−λ·net))

where net is the neural input and λ governs the “steepness” of the sigmoid curve. Figure 4 shows the plot of the unipolar sigmoid function with λ equal to 1.

Any programming language, such as C, can be used for neural network development; however, Matlab's Neural Network Toolbox [10] offers a number of prewritten functions that make it an appealing and convenient platform for creating, training and testing neural networks. Therefore, in this research project, all neural network development, training and testing was carried out using the Matlab computing platform by The MathWorks. The Matlab script written for this study begins its execution by prompting the user for a number of developmental parameters, including the range of hidden neurons to be tested, the training algorithm, the learning algorithm, the training data set, the maximum error cycles and the desired error goal. The script was preprogrammed to apply all four combinations of the aforementioned activation functions to the various neural network configurations under consideration.

It is important to note that current will flow into and out of the battery pack and hence both positive and negative current values will be measured. Clearly, passing negative input values to a neural network that is intended to operate on data ranging from [0,1] is problematic. It is reasonable to consider SOC forecasting during charging to be a different task (i.e. a different model) than SOC forecasting during discharging, thus, two separate networks can be trained and tested. Based on the topology of the hardware, the value of the current is negative when charge is flowing into the battery pack. Having a separate network forecast SOC while charge enters the battery pack circumvents the negative current value issue by operating on the absolute value of the current measurement. For future reference in this document, the network tasked with forecasting SOC under charging conditions will be referred to as netc and the network tasked with forecasting SOC under discharging conditions will be referred to as netd.
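The networks themselves were built with Matlab's Neural Network Toolbox, as noted above. The numpy sketch below is only a simplified stand-in that mirrors the stated topology (ten inputs, a variable-size hidden layer, eight SOCV outputs) with a satlin hidden layer, a unipolar-sigmoid output layer, and plain batch gradient descent on a mean-squared error; the training data here are random placeholders, not the cycling measurements, and the momentum term and stopping criteria of the actual EBPTA runs are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def satlin(x):            # saturated linear activation, bounded to [0, 1]
    return np.clip(x, 0.0, 1.0)

def satlin_grad(x):       # derivative of satlin (0 outside the linear region)
    return ((x > 0.0) & (x < 1.0)).astype(float)

def sigmoid(x, lam=1.0):  # unipolar sigmoid u(lambda, net)
    return 1.0 / (1.0 + np.exp(-lam * x))

# Topology from the text: 10 inputs (8 CCVs, temperature, current) -> hidden -> 8 SOCVs
n_in, n_hidden, n_out = 10, 10, 8
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out)); b2 = np.zeros(n_out)

# Placeholder training data scaled to [0, 1]; the real sets came from cycling tests.
X = rng.uniform(size=(200, n_in))
Y = rng.uniform(size=(200, n_out))

lr = 0.1
for epoch in range(500):                      # simple batch gradient descent (EBPTA-style)
    h_in = X @ W1 + b1
    h = satlin(h_in)
    y_hat = sigmoid(h @ W2 + b2)
    err = y_hat - Y                           # output error
    d_out = err * y_hat * (1.0 - y_hat)       # sigmoid derivative applied at the output layer
    d_hid = (d_out @ W2.T) * satlin_grad(h_in)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(axis=0)

mse = float(np.mean((sigmoid(satlin(X @ W1 + b1) @ W2 + b2) - Y) ** 2))
```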

Testing and Results
Netc's predictive error (for all given input patterns) ranged from as little as 2.49% to as much as 22.69%, depending on its particular configuration.


Netc performed best when configured with 10 hidden layer neurons, with satlin and unipolar sigmoid used as the hidden and output layer transfer functions, respectively. However, it should be noted that many of the networks tested (~50%) were able to consistently predict SOCV with estimation errors of 5% (MSE) or less. Examination of subsequent table entries reveals that similar predictive performance can be achieved using as few as 2 or 3 hidden layer neurons and any of the transfer function combinations tested. Most of the better performing configurations of netc use satlin as the activation function for all hidden layer neurons, with either satlin or unipolar sigmoid activation functions for the output layer neurons. Conversely, netc typically performed worse when configured to use the unipolar sigmoid as the activation function for the hidden layer neurons. Also, there do not appear to be any trends indicating a strong relationship between the number of hidden layer neurons and the network's performance.

The majority of netd's better performing configurations also utilized satlin as the hidden layer activation function, with no apparent preference for the activation function used in the output layer. Netd's predictive error (for all given input patterns) ranged from as little as 1.14% to as much as 21.8%, depending on its particular configuration. Netd's estimation performance was best when the network was configured with just 1 hidden layer neuron and unipolar sigmoid and satlin as the hidden and output layer activation functions, respectively. Many of the networks tested (~50%) were able to consistently predict SOCV with estimation errors of 2% (MSE) or less. As with netc, there was no obvious trend suggesting that any particular number of hidden layer neurons was optimal. Certain networks configured with as few as two or three hidden layer neurons were able to consistently predict SOCV with accuracies greater than 96% (MSE). The fact that both netd and netc were able to achieve excellent predictive abilities (1.18% and 3.17% error, respectively) with as few as 2 hidden layer neurons suggests that, perhaps, the behavior of the lithium-ion battery cells is not as highly nonlinear as previously considered.

Although some networks tested consistently gave estimation errors of over 20% MSE (likely due to inadequate network training), the results seen in this study largely reflect favorably upon the use of neural networks in predicting SOCV, and hence SOC, for lithium ion batteries. Additional data sets should be generated using similar cycling patterns as well as dynamic load patterns. Further experimental tests should be conducted using such data sets in order to verify the trends seen in this study, as well as to examine SOC estimation performance for dynamic cycling patterns. Nonetheless, the initial results are very compelling and provide motivation for further development. For example, the neural model could be expanded into a more robust system by processing additional data inputs, such as battery cell surface temperature and the loading pattern duty cycle and period. The system would also significantly benefit from the use of self-learning algorithms: SOCV measurements could be taken during long periods of system inactivity and utilized by the self-learning network as a mechanism of continual calibration, thus ensuring that parameters such as cell aging and cycling history are always accounted for.

Figures

Figure 1. Equalizer Circuit.


Figure 2. Voltage Transfer Circuit.

Figure 3. Saturated linear function plot, s(net) versus net.


Figure 4. Unipolar sigmoid plot u(λ, net) with λ = 1.

Figure 5. Neural Network Topology: inputs are the eight closed circuit cell voltages CCV[0]–CCV[7] (V), the instantaneous current flow (A), and the ambient temperature (°C); outputs are the eight stabilized open circuit voltages SOCV[0]–SOCV[7] (V).

Acknowledgements
I would like to thank the Ohio Space Grant Consortium for the opportunity to pursue this line of research through their generous support, as well as my advisor, Dr. Thomas A. Stuart, for his guidance and expertise.


References
1. X. Wang and T. Stuart, “An Op Amp Transfer Circuit to Measure Voltages in Battery Strings,” Journal of Power Sources, vol. 109, 2002, pp. 253-261.
2. Maxwell Technologies, Inc., Charge Balancing Circuit, US Patent 6,806,686, October 19, 2004.
3. Unknown source.
4. K. Kalaitzakis, G. S. Stavrakakis and E. M. Anagnostakis, “Short-term load forecasting based on artificial neural networks parallel implementation,” Electric Power Systems Research, Vol. 63, pg. 187, 2002.
5. H. Jo, Ingoo Han and Hoonyoung Lee, “Bankruptcy Prediction Using Case-Based Reasoning, Neural Networks, and Discriminant Analysis,” Expert Systems With Applications, Vol. 13, No. 2, pg. 101, 1997.
6. K. Kalaitzakis, G. S. Stavrakakis and E. M. Anagnostakis, “Short-term load forecasting based on artificial neural networks parallel implementation,” Electric Power Systems Research, Vol. 63, pg. 187, 2002.
7. A. More and M. C. Deo, “Forecasting wind with neural networks,” Marine Structures, Vol. 16, pg. 48, 2003.
8. M. Beccali, M. Cellura, V. Lo Brano, and A. Marvuglia, “Forecasting daily urban electric load profiles using artificial neural networks,” Energy Conversion and Management, Vol. 45, pg. 2899, 2004.
9. H. Demuth, M. Beale and M. Hagan, Neural Network Toolbox User's Guide, The MathWorks, Inc., version 4, 2005.
10. http://www.mathworks.com/products/neuralnet/


Helmet-Mounted Display (HMD) Interface Design for Head-Up Display (HUD) Replacement

Student Researcher: Susan B. Plano

Advisor: Dr. Jennie J. Gallimore

Wright State University Department of Biomedical, Industrial and Human Factors Engineering

Abstract
The motivation for this research is to provide cognitively coherent and performance-enhancing interfaces to new cockpit systems. The military seeks innovative, effective, efficient, and safe components and subsystems for integration into an operational helmet-mounted display (HMD)-based, no-HUD aircraft (the HUD, or Head-Up Display, is the current standard flight reference panel). Such new interfaces include ocular displays in varying configurations for pilots and crew, whose design will support operational mission scenarios. Following past experiments evaluating the performance impacts of an HMD-based, no-HUD aircraft pilot interface for air-to-ground (A/G), air-to-air (A/A), and navigation flight phases, work with fusion researchers has continued, using ATR (automated target recognition) data from HRR (high-range resolution radar) simulated experiments. Measures of performance and measures of effectiveness have been characterized, and future needs projected, for the fusion communities of ATR, image processing, tracking and ID, and surveillance, especially in the defense community. The future integration of outcomes from these experiments with predictive neural nets from information fusion research will aid designs for automation responsive to pilots (operators) under varying workloads, stressors, and types of fused information presented. The current aim is to develop a neural net predictive model of this work environment and operator performance, using computational structures and relationships which can be adjusted in current and future systems. This report describes the first exploratory neural network development, initially predicting test responses from EEG patterns, with discussion connecting this type of effort to future cybersickness prediction and decision aiding for operators.

Project Objectives
The goal of this project is to practice the implementation of just two of the broad range of network architectures employed in modern research, exploring the utility and limitations of such learning algorithms for particular applications. Using published EEG data comprised of a training set and an uncharacterized test set, I attempt to robustly classify the unknown EEG data as arising from either a left or a right finger movement. A successful classification algorithm is a step toward implementing user interaction with, and control over, automation by creating electrical activity (by thinking in discrete patterns) that can be extracted. The other objective, in cybersickness, spatial disorientation, or loss of consciousness, would be to have the automated systems detect such events in the human and help to thwart or recover from them. This experiment contrasts back propagation and forward propagation, using different seed parameters and weighting schemes. The weighting schemes are adjusted as the experimenter believes the literature would suggest signal strength flows through such a network architecture. I attempt to interpret the current use of particular transfer functions in the literature by using different Matlab routines. The introduction of the three-dimensional ROC (receiver operator characteristic) curve aids the analysis, as the robustness of predictions can be measured in part by the relationship of the false-alarm (FA) rate to the successful detection rate of the event of interest.

Methodology

• Apparatus. Data were collected using 28 EEG sensors, fused into 6 channels. A/D converter: Computer Boards PCIM-DAS1602/16 bit. Amplitude range: ±1000 µV; sampling rate: 256 samples/s.
• Data. The subject moved a finger in one of two directions. The test data was uncoded for direction, and the replicates were randomized in the training data. The EEG amplitudes were reported by channel, instead of by the characteristic fused frequency bands.


The data come from the 500 ms immediately preceding the muscle movement. This is expected to be a reasonable window of detection and reaction time for automation to act on such human intention, if it can be predicted well.

Results: Summary of Algorithm Trials
87.4% correct classification was attained using a 2-hidden-layer back-propagation network. However, the window of time was not sufficient, by preliminary ROC curve analysis, to permit interaction to correct a critical situation. The ROC curve did not achieve a suitable balance between false alarms (misclassifications as movement) and correct detections of impending movement quickly enough. This was a useful quantitative post-hoc analysis, as the number of epochs to converge to the classifications qualitatively seemed ‘fast’.

Significance and Interpretation of Results
My research aim is to link technology to the human operator.

A) Understanding the human domain: In the first phase of the research, we tested pilots as they were making decisions. The experiments underscored important facets of symbology, as the literature predicts, and the key characteristics of human-system interaction. I analyzed the results and concluded with subjective measures of interface displays. Both objective and subjective data demonstrated the direct impact of symbology on a pilot's decision making (DM), giving insights toward better design processes in a human-systems engineering paradigm. This research laid the groundwork for understanding pilots' complex decision-making process in high-workload dynamic simulations.

B) Assessing the Display of Multiple Data Sets: Additionally, I had been looking at presenting fused data on a display. Some of the information fusion techniques include neural networks, Bayesian networks, and Dempster-Shafer (DS) methods. All of these techniques afford DM for the machine, yet presenting the information requires a link between the user and the machine. Classic methods include expert systems, which mimic the decision maker with a set of rules. Neural nets require systematic training and can be used with expert systems to determine DM rules representative of an expert. Bayes nets aggregate probabilities to dynamically determine the complex DM. DS methods allow confidence factors by assessing conflicting information.

C) Integrating the Display with the User: The third phase of the study will be to use classic information fusion methodologies to assist in the assessment of decision makers. By assessing the workload and stress of an operator making decisions, we can better design decision-aiding tools. I will bring together the display technology, experimental testing, and biometric sensor data as indicators and proxy measures for workload and stress under decision-aiding situations. Analysis will include techniques from information fusion (NN, expert systems, BN, DS, etc.). This phase will objectively assess the performance of the human in conjunction with the biometric data [e.g., EMG (nausea), EEG, heart monitor, O2 sensors, eye trackers; pending equipment availability].

D) Construction of predictive models: Extending my advisor's body of work, I aim to show, by integrating operator sensor data and assessing stress and workload together with measures of human-system interface utility, that a robust predictive model of performance and physical outcomes for pilots will contribute novel design points for, and hooks into, a unified decision-aiding tool.
To these ends, the work this year, which began predictive modeling using published EEG data, will next continue with refinement of these methods to work with EMG data for cybersickness.
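A two-dimensional ROC computation of the kind used in the post-hoc analysis above can be sketched as follows. The classifier scores and labels here are synthetic placeholders, and the study's three-dimensional ROC adds a further axis not shown in this sketch.

```python
import numpy as np

def roc_points(scores, labels):
    """False-alarm rate vs. detection rate as the decision threshold is swept.

    scores: classifier outputs (higher = 'movement' / positive class)
    labels: 1 for the event of interest, 0 otherwise
    """
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                    # detections accumulated as threshold is lowered
    fp = np.cumsum(1 - labels)                # false alarms accumulated likewise
    tpr = tp / max(labels.sum(), 1)           # detection (true-positive) rate
    far = fp / max((1 - labels).sum(), 1)     # false-alarm rate
    return far, tpr

# Placeholder scores/labels standing in for the network's left/right outputs.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
scores = labels * 0.3 + rng.normal(scale=0.4, size=200)   # weakly informative classifier
far, tpr = roc_points(scores, labels)
auc = float(np.trapz(tpr, far))               # area under the ROC curve
```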


Figures

Figure 1. 6 channels (grayscale less informative than color depictions) for training sets and test set. Fourth panel depicts binary nature of classifier required.

Figure 2. Vectors from sensors which overlap greatly and require multidimensional classifier. The hidden nodes, or the ‘black-box’ nature of the neural network attempt to find weights for the nodes that successfully discriminate the data by known outcome.


Figure 3. Classification of vectors. Note in first panel that most vectors are classified properly as 0 or 1. Panel 3 (lower left) is a comparison of other algorithms’ reported results, indicating best-case has not reached 90%. Panels 2 and 4 show the error characterization.

Figure 4. Depiction of a 3-dimensional Receiver Operator Characteristic Curve (3D ROC)


Acknowledgments and References
We thank NASA, OAI, OSGC, WSU, USAF, DAGSI and Prime Contractor SDS International, lead company on this Phase II SBIR.

References (Partial listing for this interim report.)
1. D. Fellman and D. Van Essen, "Distributed Hierarchical Processing in the Primate Cerebral Cortex," Cerebral Cortex, V1, 1991, pp. 1-47.
2. E. Kandel, J. Schwartz, and T. Jessel, Principles of Neural Science, Appleton and Lange, CT, 1991.
3. E. P. Blasch and J. C. Gainey, "Physiological Motivated Fusion Architecture," NSSDF98, Atlanta, GA, April 1998.
4. E. P. Blasch, "Multiresolutional Sensor Integration with Prediction," SPIE Int. Symp. on Aerospace/Defense Simulation and Control, Wavelet App., Orlando, FL, April 13-17, 1998, pp. 158-167.
5. Fraunhofer-FIRST, Intelligent Data Analysis Group, and Freie Universität Berlin, Department of Neurology, Neurophysics Group. [EEG data]
6. J. L. Johnson, "Pulse-coupled neural nets: translation, rotation, scale, distortion, and intensity signal invariance for images," Applied Optics, Vol. 33, No. 26, 10 September 1994, pp. 6239-6253.
7. K. M. Lee, Z. Zhi, R. Blenis, and E. Blasch, "Real-time vision-based tracking control of an unmanned vehicle," IEEE Journal of Mechatronics - Intelligent Motion Control, October 1995, pp. 973-991.
8. S. K. Rogers, et al., "Neural Networks for ATR," Neural Networks, Vol. 8, No. 7, pp. 1153-1184, 1995.


Design of Net Zero Energy Campus Residence

Student Researcher: Gregory S. Raffio

Advisor: Dr. Kelly Kissock

University of Dayton Department of Mechanical and Aerospace Engineering

Abstract
Global warming, pollution, and deforestation present major environmental challenges. Non-renewable fossil fuels account for 82% of the world's energy consumption (Boyle, 2004). The use of fossil fuels is the primary contributor to global climate change and the source of the majority of all air pollution (U.S. EPA, 1994). Landfills continue to be repositories for recyclable materials, and clean water is difficult to obtain in many regions of the world. Many old-growth forests are being harvested for lumber production, and toxic materials are prolific in our work and living spaces. In response to both global and local challenges, the University of Dayton is committed to building a net-zero energy student residence, called the Eco-house. This report summarizes both the design and the cost-benefit analysis of a net-zero energy campus residence. Energy use of current student houses is presented to provide a baseline for determining energy savings. The use of the whole-system, inside-out approach to guide the overall design is described. Using the inside-out method, the energy impacts of occupant behavior, appliances and lights, the building envelope, energy distribution systems and primary energy conversion equipment are discussed. The designs of solar thermal and solar photovoltaic systems to meet the hot water and electricity requirements of the house are described. Eco-house energy use is compared to the energy use of the existing houses. Cost-benefit analysis is performed for the whole house. At a 5% discount rate, a 5% borrowing rate for a 20-year mortgage, a 35-year lifetime, and an annual fuel escalation rate of 4%, the Eco-house can be constructed for no additional lifetime cost.

Project Objectives
The housing stock at the University of Dayton is representative of 20th-century housing in the Midwest. Residences across campus range from early-1900s stick-frame houses with no insulation and high rates of infiltration to houses constructed to meet the building code of the early 2000s. UD spends significantly more than $1 million annually to provide energy to over 450 houses. While a large portion of this cost is due to irresponsible energy practices, much of the cost comes from 20th-century energy-inefficient building practices. The Eco-house project is the culmination of years of building energy research that includes study of both occupant and technological changes to reduce residential energy use. The main objective of the Eco-house project is to design a net-zero energy residence that is cost effective.

Methodology: Design of Net-Zero Energy Residence
The inside-out approach is a structured method of analyzing opportunities for energy efficiency improvements that begins by focusing on the eventual end use of the energy and proceeds outward to the distribution system and energy conversion equipment. Application of the inside-out approach has been shown to maximize savings while minimizing first cost (Kissock et al., 2001). One reason for the success of the inside-out approach is the multiplicative effect of losses as energy is converted, distributed and used. For example, consider an electrical appliance that provides 1 kWh of useful work.
If the appliance is 50% efficient, the electrical distribution system is 93% efficient, and the electrical power plant is 33% efficient, then, for every useful kWh provided by the appliance, the quantity of source energy consumed is:

1 kWh / (50% x 93% x 33%) = 6.5 kWh

This means that reducing energy use by 1 kWh at the end use (inside) results in 6.5 kWh of energy savings at the source (outside). Thus, minimizing end-use energy, then distribution losses, and finally improving the efficiency of the primary energy conversion equipment tends to multiply savings. For the Eco-house, this means sequentially focusing on occupant behavior, appliances and lighting, the building envelope (walls, ceiling, windows, infiltration), the energy distribution system (pumps, fans, radiant panels), primary space conditioning equipment (ground-source heat pump, etc.), and finally solar heating and electricity systems.


Baseline House Analysis
In order to quantify savings from building an Eco-house, the energy use of current student houses must be understood. The baseline house for Eco-house comparison was constructed in 2003. Walls consist of wood siding, 0.75-inch OSB sheathing, 2” x 4” wood stud frames built 16 inches on center with 4-inch fiberglass batt insulation, and ½-inch drywall on the interior surface. Assuming winter convection coefficients, the R-value of the walls is about 13 hr-ft2-F/Btu. The double-hung windows are double-pane, with vinyl frames. The windows have an R-value of about 2 hr-ft2-F/Btu and an average solar heat gain coefficient (SHGC) of about 0.531. The roof and ceiling consist of asphalt shingles, 0.75-inch OSB sheathing, attic space, 4 inches of fiberglass batt insulation, and drywall on the interior surface. The combined R-value of the roof and ceiling is about 16 hr-ft2-F/Btu (Raffio et al., 2004). A blower door test measured the rate of infiltration to be 0.62 air changes per hour (Kissock, 2004). The houses use 80%-efficient natural gas furnaces and natural gas hot water heaters with an average efficiency of about 55%. The air conditioners have a SEER of 10 (Btu/Wh). Monthly utility use for four of these five-person houses was obtained and studied. After adjusting for occupancy, average annual electricity use in a regularly occupied, un-air-conditioned house is about 11,400 kWh. Electrical appliances and lighting in the houses were inventoried and approximate operating hours were observed. The power draw of each type of electrical equipment was determined from nameplates and manufacturers' data. Using these data, electricity use was broken down by equipment use and calibrated to match the 11,400 kWh measured annual electricity use.

Annual and peak building energy use in a typical baseline house was simulated using the ESim building energy simulation software (Kissock, 1997). ESim uses typical meteorological data (NREL, 1995) to simulate hourly loads and energy use. Figures 1 and 2 show simulated and actual electricity and natural gas use of a newly constructed five-person university house. Except for summer air conditioning, the simulations are well calibrated to the actual energy use data. Simulated electricity consumption is 13,455 kWh per year with air conditioning, and simulated natural gas consumption is 61.2 mmBtu per year including heating and hot water.

Occupant Behavior
The Eco-house will be populated by students motivated to practice energy-conscious behavior. Students will reduce electricity consumption by using natural lighting and by turning off lights, computers and televisions when not needed. Calibration results indicate that electricity consumption could be reduced by about 33%, from 11,389 kWh per year to 7,654 kWh per year, strictly through occupant behavior. Such large savings were observed and documented in Occupancy and Behavioral Effects on Residential Energy Use (Seryak, 2003).

Appliances and Lighting
The Eco-house will incorporate compact fluorescent lights and Energy Star appliances. By improving occupant behavior and using energy efficient appliances and lights, electricity consumption could be reduced to 4,985 kWh per year. This is 36% less than the projected electricity consumption from solely reducing operating hours and 56% less than baseline electricity use.

Building Envelope
In order to reduce heating and cooling loads, the Eco-house walls, ceiling, windows and perimeter insulation will have high thermal resistances. The walls and ceiling will be constructed with Structurally Insulated Panels (SIPs) from R-Control Systems. SIPs are both tighter and more insulative than framed walls (Christian, 2004). The R-value for the proposed SIP walls is about 39 hr-ft2-F/Btu. The cathedral-style roof/ceiling will be constructed of thicker SIPs; the R-value of the roof/ceiling is about 51 hr-ft2-F/Btu. Perimeter insulation reduces heat transfer from the basement to the ground. The Eco-house will have insulated, pre-cast basement walls with an overall R-value of 23 hr-ft2-F/Btu. Significant winter heat loss and summer heat gain occurs through windows; in addition, poorly installed windows also increase air leakage into and from the house.


The North-, East-, and West-facing windows will be low-emissivity, triple-pane windows with a center-of-glass U-value below 0.2 Btu/hr-ft2-F and SHGC below 0.3. On the South side, the windows will be low-emissivity, triple-pane windows with a center-of-glass U-value below 0.35 Btu/hr-ft2-F and SHGC above 0.4.

Houses constructed with SIPs are far more airtight than typical frame houses, and require mechanical ventilation to maintain fresh indoor air. ASHRAE recommends a minimum ventilation rate of about 0.35 air changes per hour to prevent the build-up of indoor air pollutants (ASHRAE, 1989). The Eco-house will have an energy recovery ventilator (ERV) to pre-condition outside air by exchanging energy between the intake and exhaust air streams. To provide 0.35 air changes per hour, the energy recovery ventilator will provide about 75 cfm with an effectiveness of 52%.

Energy Distribution System and Space Conditioning
Typical UD student houses are heated by furnaces and cooled by air conditioners. In these houses, a constant-air-volume distribution fan blows air over heating and cooling coils, through ducts to the conditioned space. Simulation results, which assume a pressure drop of 2 in-wg, indicate that annual supply fan electricity use is about 1,000 kWh/yr. Eco-house heating, cooling and ventilation will also be supplied through ductwork. The Eco-house will use an air-to-air heat pump with a variable speed fan to distribute heating and cooling. According to ESim, the peak heating and cooling loads for the Eco-house are 9,260 Btu/hr and 9,851 Btu/hr. The heat pump is rated at 2 tons and has the capability to independently control air temperature and humidity. Additionally, the heat pump has internal electric resistance heating elements that supply heat when the outdoor air temperature is too low. Heat pumps are usually more efficient than electric resistance heating; however, when the outdoor air temperature falls below a certain temperature, air-to-air heat pumps become less efficient than electric resistance heating. According to system specifications, the heat pump will operate with an average COP of about 9.4 and an average cooling SEER of about 14.5 (Btu/Wh). Calculations indicate that the ½-hp variable speed fan on the air-to-air heat pump will use about 188 kWh per year in electrical energy.

Solar Water Heating and Solar Photovoltaic System
The inside-out approach was also applied to the hot water system. On the inside, energy- and water-efficient dishwashers and clothes washers are assumed to reduce overall hot water use by 20%. In the distribution system, the hot water supply temperature has been reduced from 60 C (140 F) in typical UD residences to 48.9 C (120 F). Finally, a solar thermal hot water system will be the primary source of heat for hot water, with supplemental heat provided by a traditional high-efficiency, electric hot water heater. Energy use for domestic hot water was simulated using SolarSim software (Kissock, 1997). SolarSim uses typical meteorological data (NREL, 1995) to simulate the hourly performance of photovoltaic and solar thermal systems. Using SolarSim, a solar thermal system was designed with two 3.74-m2 solar collectors facing due south at a tilt angle of 44 degrees from the horizon. The FrTa (y-intercept of the collector performance curve) is 0.74 and the FrUl (slope of the performance curve) is 1.527. The heat exchanger is 80% effective and the system has 120 gallons of storage. Simulation results indicate that 97% of the solar hot water heating load will be provided by the solar system. The electric hot water heater will provide an additional 122 kWh per year in supplemental hot water heating.
The Eco-house will employ a photovoltaic solar system (PV) sized to generate the total annual electricity requirements of the house. The PV system was designed using the SolarSim simulation software. Based on these simulations, a system with 32 1.3-m2 collectors, facing due South at a tilt angle of 33 degrees from the horizon was selected. The collectors have a 165 W rating at 47 C normal operating temperature. Based on this simulation, PV system output is estimated to be about 6,577 kWh per year (Figure 3). The figure shows simulated monthly Eco-house PV production and simulated monthly energy use. One can see that the PV system will produce more electricity than the house needs during most of the year and drop off significantly in the winter. Although the Eco House is designed to use no energy on a net basis over the year, the cost of energy for the house will not be zero. Current Ohio Law mandates that electric utilities install a single meter to measure the net amount of electricity used (or generated) by the house each month. However, the law permits utilities to sell electricity for the standard residential rate and purchase electricity for their lowest

avoided cost. Using current prices and rates, DP&L will sell electricity for about $0.088 per kWh and purchases it for $0.053 per kWh. Methodology: Whole House Cost Benefit Analysis Fuel Escalation Rate In order to predict the energy cost escalation, we examine local historical energy costs. The costs are adjusted with the implicit price deflator to reflect their real cost in 2000 $US (EIA 2000). Historical electricity prices from the Dayton area were obtained from actual bills for a resident of Dayton. Readings were selected from the same time of year to minimize seasonal fluctuations in energy prices. Adjusted to 2000 $US, the price of electricity in September, 1995 was $0.101 per kWh. The price of electricity in September, 2005 was $0.093 per kWh. Between 1995 and 2005, local electricity prices have decreased at a rate of 1.16% annually in the Dayton area. Unit costs of natural gas were read for winter months since the majority of natural gas use occurs during these months. Adjusted to 2000 dollars, the price of natural gas in January, 1996 was $0.44 per ccf. The price of natural gas in January, 2005 was $1.03 per ccf. Between 1996 and 2005, natural gas prices have increased at a rate of 7.97% annually in the Dayton area. In 2004, electricity and natural gas were 67% and 33% of all energy costs. Thus, locally, between 1995 and 2005, the weighted fuel escalation rate was 1.85%. Since deregulation of electrical utility companies beginning in 2001, a fix has been placed on the residential cost of electricity for the next 5 years (EERE). Beginning in 2006, it is expected that electricity rates will increase in a similar fashion to natural gas prices. Considering the fact that, locally, energy escalation rates have been about 1.85% between 1995 and 2005, we bracket the study with energy escalation rates between 1% and 4% annually over the 35 year economic lifetime of the Eco-house. The magnitude of annual growth rates can be visualized by applying the rule of seventy, which states that the doubling time is approximately equal to the ratio of 70 and the annual rate of increase. Thus, a 1% annual increase corresponds to a doubling of the real cost of energy every 70 years. A 4% increase corresponds to a doubling every 18 years. Baseline and Eco-house Energy Costs Annual energy costs for the old baseline, new baseline and Eco-house are calculated for the bracketed annual fuel escalation rates of 1% and 4% over the 35 year lifetime that the University allots to a house. As seen in Figure 4, the cost of operating the old baseline and new baseline houses varies significantly more than the cost of operating the Eco-house. This is seen in the flatness of the annual energy costs for the Eco-house and the steepness of the annual energy costs for new construction. Energy costs are subject to a high degree of variability and cannot be determined by the university. Owning and Energy costs for New Baseline In 2005, a typical new 5-student house costs $225,000 fully-furnished with appliances, painting and carpeting. To finance the house, the University of Dayton typically borrows $225,000 at a 5% interest rate over a 20-year period. The economic lifetime of the house is 35 years. The largest single operating cost associated with student housing is the cost of energy. The annual energy cost for new baseline houses is $1,940. Owning and Energy Costs for Eco-house The Eco-house design is similar in size and layout to the new baseline houses currently constructed at UD. 
Thus, the cost of constructing the Eco-house is the cost of constructing the baseline house plus the added cost of the Eco-house components. A significant portion of the additional cost of constructing the UD Eco-house is associated with the solar PV system, the solar hot water system, and data monitoring. The predicted net additional cost is $46,657. Not every change to the new baseline design is more expensive: some changes eliminate traditional systems and reduce construction cost. For example, replacing the traditional furnace and air conditioner with an air-to-air heat pump eliminates the costs of the furnace and air conditioner, and the sole use of electricity eliminates the need for a natural gas hookup.
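A hedged sketch of the owning-plus-energy life-cycle comparison described in this section is given below. The $225,000 construction cost, $46,657 Eco-house premium, 5% 20-year loan, 35-year lifetime, first-year energy costs, and bracketed escalation rates come from the text; the 5% discount rate used to form present values is an assumption (the report does not state one), so the totals will not exactly reproduce the figures quoted in the results.

def present_value_of_costs(first_cost, annual_energy_cost, escalation,
                           loan_rate=0.05, loan_years=20, life_years=35,
                           discount_rate=0.05):
    # Level annual payment on the construction loan (capital recovery factor).
    payment = first_cost * loan_rate / (1.0 - (1.0 + loan_rate) ** -loan_years)
    pv = 0.0
    for year in range(1, life_years + 1):
        cash = annual_energy_cost * (1.0 + escalation) ** (year - 1)
        if year <= loan_years:
            cash += payment
        pv += cash / (1.0 + discount_rate) ** year
    return pv

# New baseline vs. Eco-house at a 4% annual fuel escalation rate.
new_baseline = present_value_of_costs(225_000, 1_940, escalation=0.04)
eco_house = present_value_of_costs(225_000 + 46_657, 29, escalation=0.04)
print(f"new baseline: ${new_baseline:,.0f}   Eco-house: ${eco_house:,.0f}")
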

The present value of owning costs of the Eco-house would be about $271,657. Current annual energy costs of the UD Eco-house will be $29 per year. Results Obtained ESim estimates that total Eco-house electricity use, including space conditioning and hot water heating, will be about 6,500 kWh per year (PV system output is estimated to be about 6,577 kWh per year). Annual electricity use in the newer baseline house is 13,455 kWh per year, and annual electricity use in the older baseline house is 15,581 kWh per year. Thus, the Eco-house will use about 52% less electricity than the new baseline house, and about 58% less electricity than the old baseline house. The Eco-house will use no natural gas, compared to 61 mmBtu per year for the newer baseline house and 163 mmBtu per year for the older baseline house. Monthly electricity and natural gas use of all three houses are shown in Figures 9 and 10 in “Conceptual Design of a Net Zero Energy Campus Residence” (Mertz et al., 2005). At an annual fuel escalation rate of 1%, the present values of owning and energy costs are $261,212 and $272,196 for the new baseline and Eco-house, respectively. At an annual fuel escalation rate of 4%, the present values of owning and energy costs are $280,471 and $272,482 for the new baseline and Eco-house, respectively. At an annual fuel escalation rate of 1%, the Eco-house is less cost-effective than the new baseline. However, at an annual fuel escalation rate of 4%, the Eco-house is more cost-effective than the new baseline. The current cost of natural gas in the Dayton area is $12.50 per mmBtu and the cost of electricity is about $0.088 per kWh. Assuming the total efficiency of the electrical generation and distribution is 30%, the total source energy savings from the Eco-house compared to the old baseline house and new baseline house will be about 340 mmBtu per year, and 214 mmBtu per year, respectively. Assuming 2.3 lbs CO2 per kWh of electricity (NRDC, 1998) and 113 lbs CO2 per mmBtu of natural gas, total CO2 emissions for the old and new baseline houses are about 53,713 lbs per year and 37,964 lbs CO2 per year, respectively. The Eco-house will generate no net CO2 emissions. Significance and Interpretation of Results In addition to supporting the university’s commitment to sustainability and environmental stewardship, the Eco-house would be cost-effective to build and operate over a life-cycle of 35 years. If the Eco-house is operated for more than 35 years, additional energy savings will be accrued. Additional externalities such as publicity for the University and attraction of new students could easily add to the cost effectiveness of the Eco-house. Further, the Eco-house will provide a living-learning community where students will be encouraged to study the effect of technological improvements and occupant behavior on energy consumption. Occupants will monitor building performance through instantaneous monitoring equipment such as thermocouples, humidity sensors and power meters. Additional Publications This report heavily summarizes the work that has been done on the UD Eco-house project to date. Additional, more detailed information may be found in Conceptual Design of Net Zero Energy Campus Residence (Mertz et al., 2005) and this summer in "Cost-Benefit Analysis of Net Zero Energy Campus Residence".
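The emissions comparison above follows directly from the quoted factors of 2.3 lbs CO2 per kWh of electricity and 113 lbs CO2 per mmBtu of natural gas; the short sketch below repeats that arithmetic. Because the annual consumption figures are rounded here, the totals differ slightly from those in the text.

EF_ELEC = 2.3    # lbs CO2 per kWh of electricity
EF_GAS = 113.0   # lbs CO2 per mmBtu of natural gas

houses = {
    "old baseline": (15_581, 163),    # (kWh/yr, mmBtu natural gas/yr)
    "new baseline": (13_455, 61),
    "Eco-house (net)": (0, 0),        # PV generation offsets annual electricity use
}
for name, (kwh, mmbtu) in houses.items():
    print(f"{name:16s} {kwh * EF_ELEC + mmbtu * EF_GAS:10,.0f} lbs CO2 per year")
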

Figures/Charts

Figure 1. Baseline Electricity Use. Figure 2. Baseline Natural Gas Use.

[Figure 3 chart, "PV Generation and Eco-house Requirements": monthly PV generation and monthly Eco-house electricity use (kWh/mo), Jan-04 through Dec-04, on an energy axis from 0 to 800 kWh/mo.]

[Figure 4 chart, "Old Baseline, New Baseline, and Eco-house Real Future Energy Cost Comparison": annual energy costs of a 5-student house ($0 to $14,000) from 1990 to 2040 for the old baseline, new baseline, and Eco-house at escalation rates e = 1% and e = 4%.]

Figure 3. Elec. Generation and Requirements. Figure 4. Real Future Energy Cost Comparison.

Acknowledgments
George Mertz: George Mertz, my design partner, has done just as much work as I have on this project, if not more. He has taught me much with his vast knowledge of economics, building energy, and sustainability. He is more of a mentor than a design partner.
Dr. Kelly Kissock: I thank our advisor/teacher/mentor/friend Dr. Kelly Kissock for the countless hours that he helped us work on this and many other projects. He has been an invaluable part of the UD Eco-house project and without him, we would have never come so far.
Dr. Kevin Hallinan: Dr. Hallinan has been a close advisor during my entire undergraduate and graduate career and has provided me with support in countless areas of my education.
John Seryak, Chris Schmidt, Carl W. Eger III, Kevin Carpenter, and the entire University of Dayton Industrial Assessment Center: The IAC staff has been influential in supporting the philosophy of sustainability at the University of Dayton and has graciously lent their time to assist us with the technical analysis of energy use. It is on the shoulders of their solid technical writing and analysis that this Eco-house project stands.

References
1. American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), 2001. ASHRAE Handbook, Fundamentals.
2. Boyle, Godfrey (ed), 2004, "Renewable Energy: Power for a Sustainable Future", 2nd Ed, Oxford UP.
3. Christian, J., 2004, "The First Attempt at Affordable Zero Energy Houses", Oak Ridge National Laboratory, Oak Ridge, Tennessee. www.ornl.org.
4. Energy Information Administration (EIA), 2000, "EIA Annual Report of World Energy Consumption". www.eia.doe.gov.
5. EERE, Ohio Utility Deregulation Info.
6. http://www.eere.energy.gov/femp/program/utility/utilityman_elec_oh.cfm
7. Kissock, K., 1997. "SolarSim Solar Energy Simulation Software", University of Dayton, Dayton, Ohio.
8. Kissock, K., Bader, W. and Hallinan, K., 2001, "Energy and Waste Reduction Opportunities in Industrial Processes", Journal of Strategic Planning for Energy and Environment, Association of Energy Engineers, Vol. 21, No. 1.
9. Kissock, K., 2004, "Heating and Air Conditioning Student Projects", (http://www.engr.udayton.edu/faculty/jkissock/http/HAC).
10. Mertz, G. A., Raffio, G. S., Kissock, K., Hallinan, K. P., 2005a. "Proceedings of the ISEC2005 International Solar Energy Conference", August 6-12, Orlando, Florida.
11. Mertz, G. A., Raffio, G. S., Kissock, K., 2005b. "Economic Analysis of the UD Eco-house: A New Era of UD Housing" Technical Report for Eco-house Economic Justification to UD Department of Residential Services and University Chief Financial Officer, Department of Mechanical and Aerospace Engineering, University of Dayton, Dayton, Ohio.
12. National Renewable Energy Laboratory (NREL), 1995, "User's Manual for TMY2s", U.S. Department of Energy, NREL/SP-463-7668. http://rredc.nrel.gov/solar/old_data/nsrdb/tmy2/.
13. Raffio, G. S., Mertz, G. A., Paterra, K. J., King, A. S., 2004. "University of Dayton Eco-house Design" Senior Design Project, Department of Mechanical and Aerospace Engineering, University of Dayton, Dayton, Ohio.
14. Raffio, G. S., Mertz, G. A., Kissock, K. "Cost-Benefit Analysis of Net Zero Energy Campus Residence". Proceedings of the ISEC2006 International Solar Energy Conference. July 13-18, 2006, Denver, Colorado. (Submitted 4-6-06)
15. Seryak, J. and Kissock, K. 2003. "Occupancy and Behavioral Effects on Residential Energy Use" Proceedings of the 2003 American Solar Energy Society Conference. Austin, Texas: American Solar Energy Society.
16. US EPA, 1994. http://www.epa.gov/globalwarming/climate/index.html

Coalbed Methane Potential in Southeast Ohio

Student Researcher: Turner K. Reisberger

Advisor: Dr. Robert Chase

Marietta College Department of Petroleum Engineering and Geology

Abstract Conventional oil and gas reservoirs are becoming a rare occurrence in North America. Many of these conventional reservoirs have been completely developed or are minimal in size. Therefore, oil and gas companies in the United States have moved to more unconventional sources of petroleum products. One such unconventional source is coal-bed methane (CBM). This paper will detail the mechanisms of a coal-bed methane reservoir and how this source of methane gas is produced. This paper will also look at ways to determine potential for CBM in southeast Ohio. The first step was to model a CBM reservoir using reservoir simulation software by Computer Modeling Group, Inc. A model was built using this simulation software that can be used to simulate any CBM reservoir with known geologic and petrophysical parameters. In conjunction with building the reservoir model, a study was done to determine the ideal drainage patter for producing CBM wells. The second step was using known data about specific Ohio coal beds to calculate an estimate of the original gas-in-place. Reserve values for several coal beds in southeast Ohio were obtained using Monte Carlo simulation. Introduction Coal-bed methane is a source of natural gas that can be found in every coal seam. Methane is considered a nuisance for coal miners, and is usually vented to the atmosphere during coal mining operations. However, many companies have found the methane gas from unmined coal seams to be very profitable. CBM reservoirs are considered an unconventional source of natural gas, much like tight gas sands and gas shales. Therefore, higher costs and more resources are used to produce unconventional reservoirs. However, the potential for methane production is great within these coal-bearing formations. Little is known about CBM potential in southeast Ohio. Production of CBM is extensive in West Virginia and Pennsylvania, but uncertainty about coals in Ohio has resulted in little CBM production from this area. The remainder of this document will discuss the mechanics of CBM reservoirs and how they are produced. Then, the process of modeling a CBM reservoir using reservoir simulation software will be discussed. After building the reservoir model, it was used to determine an ideal drainage pattern for producing CBM wells. Next, a method of calculating reserves of methane for southeast Ohio using existing data will be discussed. Finally, conclusions on the potential for CBM production in southeast Ohio will be addressed. CBM Reservoir Mechanics To understand CBM reservoirs, it is important to know the mechanics of the coal itself. Coals are a naturally fractured rock. Porosity and permeability exist not only in the matrix of the coal, but along fractures known as the cleat system. The cleat system is separated into two types of cleats: face cleats and butt cleats. Face cleats and butt cleats run perpendicular to each other within the reservoir, breaking the coal into many box-like parts. The cleat spacing in coal can vary from 1/10 to more than one inch (Zuber 3.1). The coal itself contains pore space that has a certain porosity and permeability, but these values are much smaller than the porosity and permeability of the cleat system. The total system of matrix rock porosity and fracture porosity is known as a dual-porosity system. Methane within the coal seam can be found in two locations. First, methane can be found within the pore space of the cleat system and matrix. Second, methane can be found attached to the surface of the coal. 
The process of methane attaching itself to the surface of coal is known as adsorption. The amount of methane that is attached to the coal is the methane content. The methane content is an extremely

important number for determining the reserves of gas in a CBM reservoir. This number is reported in standard cubic feet per ton of coal (scf/ton). Greater than ninety percent of the gas in a coal seam is adsorbed onto the surface of the coal, while less than ten percent is located within the matrix porosity and cleat system (Zuber 3.2). The ability to adsorb methane allows coal to have a very high storage capacity compared to conventional sandstone reservoirs. A coal bed contains up to ten times the amount of methane that a sandstone reservoir of the same size contains. CBM Production Coal beds are initially filled with water, and a certain pressure is exerted on the coal layer due to the weight of the overlying rocks. This pressure is what keeps the methane adsorbed to the coal surface. Only when pressure decreases can the methane molecules desorb from the coal surface. The relationship of pressure to the methane content can be seen on a Langmuir curve, shown in Figure 1. At high pressures, a decrease in pressure will not significantly affect the methane content of the coal. However, when the pressure reaches a critical desorption pressure, methane will begin to desorb from the coal at a much higher rate.
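For reference, the curve in Figure 1 has the form of the Langmuir isotherm, V(p) = VL·p/(pL + p), where VL is the Langmuir volume (the maximum adsorbed gas content) and pL is the Langmuir pressure (the pressure at which half of VL is adsorbed). The sketch below evaluates that relation; the VL and pL values are assumed for illustration and are not the parameters behind Figure 1.

def langmuir_content(p_psia, v_l=500.0, p_l=400.0):
    """Adsorbed methane content (scf/ton) at reservoir pressure p_psia."""
    return v_l * p_psia / (p_l + p_psia)

if __name__ == "__main__":
    for p in (2000, 1500, 1000, 500, 250, 100):
        print(f"{p:5d} psia -> {langmuir_content(p):6.1f} scf/ton")
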

[Figure 1 chart, "Langmuir Curve": methane gas content (scf/ton), 0 to 500, versus pressure (psia), 0 to 2,000, with the critical desorption pressure marked.]

Figure 1. Langmuir curve showing the decrease in methane gas content of the coal as pressure declines.

To decrease the pressure of the reservoir to the critical desorption pressure, water must be removed from the formation. As water is removed, pressure begins to fall. It may take several months to several years to produce enough water to reach the critical desorption pressure, but once this point is reached, water production declines and gas production increases drastically. CBM Reservoir Simulation For this project, Computer Modeling Group, Inc. (CMG) software was used to develop a model for CBM reservoirs. Reservoir simulations are used to predict the performance of a reservoir using known data. Once a general model is produced for a specific type of reservoir, CBM in this case, data can be input into the model to predict the performance of either one well, a pattern of wells, or an entire field of wells. The CMG software was used to produce a model for a CBM well. Little data is available to make an accurate simulation of a well specifically for southeast Ohio, so no simulation was done for this application. The reservoir model would prove useful in conjunction with a CBM pilot project conducted by a producer in this area.

Using the model as a comparison tool, it was also possible to simulate different drainage patterns for producing CBM. Two patterns were used for comparison: a single radial flow model and a five-spot pattern. The goal was to determine which model produces better production results over a ten-year period. After running the simulation for both a single well and a five-spot pattern, it was found that the five-spot pattern produced a peak gas rate four years earlier than the single drainage pattern. Also, cumulative gas production within a ten-year period for the five-spot pattern was 1.84 times the cumulative gas production from a single well.

Gas Reserves Calculations
The second part of this project was to determine the gas reserves for coal beds located in southeast Ohio. Data was obtained from a coal resource study done for Belmont, Guernsey, Monroe, Noble, and Washington counties. Data used for the calculation of gas reserves included methane content and total coal resources for the five counties. Table 1 lists the coal formations and their associated methane content and total coal resource estimates.

Formation           Methane Content (ft3/ton)   Total Coal Resources (billion tons)   Gas Reserves (Bcf)
Upper Freeport      88                          1.567                                 182
Lower Freeport      51                          2.101                                 111
Middle Kittanning   57                          3.002                                 185

Table 1: Estimated gas reserves for coal formations in Belmont, Guernsey, Monroe, Noble, and Washington Counties. Source: Couchot et al., 1980.
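As an illustration of how the volumetric estimate behind Table 1 can be framed as a Monte Carlo calculation (the approach described in the next paragraph), the sketch below samples gas-in-place as methane content times total coal resource for the Upper Freeport bed. The ±20% triangular distributions are assumptions for this sketch, not the distributions used in the study, so its percentiles will not match the Table 1 values.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# OGIP = methane content (scf/ton) x total coal resource (tons); Upper Freeport inputs.
content = rng.triangular(0.8 * 88, 88, 1.2 * 88, n)
resource = rng.triangular(0.8 * 1.567e9, 1.567e9, 1.2 * 1.567e9, n)
ogip_bcf = content * resource / 1e9    # scf -> Bcf

p10, p50, p90 = np.percentile(ogip_bcf, [10, 50, 90])
print(f"OGIP percentiles (Bcf): 10th = {p10:,.0f}, 50th = {p50:,.0f}, 90th = {p90:,.0f}")
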

Monte Carlo simulations were done on the Upper Freeport, Lower Freeport, and Middle Kittanning coal beds to determine the original gas-in-place for the study area. Table 1 shows the resulting gas reserves from the Monte Carlo simulations.

Conclusions
The results from the Monte Carlo simulations show that there is a significant amount of CBM available in coal seams in southeast Ohio. CBM has not been extensively developed in this area of the Appalachian Basin. CBM production from neighboring states also suggests that the potential is there for drilling CBM in southeast Ohio. A pilot project would offer significant opportunities to learn more about these coal formations. A pilot project would also help in creating an accurate reservoir model for CBM wells in Ohio using the CMG reservoir simulation software. It is also suggested from this study that a five-spot pattern is ideal for CBM wells versus drilling one well. The shorter drainage time with the five-spot pattern will produce a larger quantity of water over a shorter period of time, leading to larger quantities of gas.

Acknowledgments
Dr. Bob Chase, Dr. Ben Thomas, Mr. Dave Freeman, Computer Modeling Group Inc., and the Ohio Space Grant Consortium.

References
1. Couchot, Michael L., Crowell, Douglas L., Van Horn, Robert G., and Struble, Richard A.

Investigation of the Deep Coal Resources of Portion of Belmont, Guernsey, Monroe, Noble, and Washington Counties, Ohio. State of Ohio Department of Natural Resources Division of Geological Survey. Copyright 1980.

2. Zuber, Michael D. Basic Reservoir Engineering for Coal. CBM Reservoir Engineering Short Course. Fall 2004. Morgantown, WV.

Vapor Phase Catalytic Ammonia Converter (VPCAR) LabView Programming

Student Researcher: Jeffrey N. Ríos

Advisor: Rochelle May

Case Western Reserve University Mechanical and Aerospace Engineering Department

Abstract The Flight Software Engineering Branch is responsible for embedded software products for onboard data handling; management and control of flight hardware during project testing and development. One of the projects that this branch was contributing to was the Vapor Phase Catalytic Ammonia Converter (VPCAR). This project’s main goal is while undergoing different test and experiments to eventually be able to convert an incoming combined waste stream (urine, condensate, and hygiene) and produce potable water in a single step. I was mostly working either in a lab with the test flight laptop or in my office with the actual pump and LabView software. The rack itself consisted of many components for the VPCAR project, but I was mainly responsible for the pump that was going to be supplying the test unit with water or urine. The pump was a Cole Palmer MasterFlex, and was going to communicate through means of a serial port interface. After going through some obstacles with the pump and software, I wrote a LabView VI that was able to communicate with the pump and execute some basic functions all from the computer. The software was able to turn on or off the pump, control its speed, flow rate, and set default values for the pump. I was also responsible for the Phantom camera that was going to be observing the rotating disk under testing. The camera was also completely controlled from the software and was able to be controlled form the main computer for the project. When everything was placed together into the rack, the test unit was going to be flown on the DC-9 airplane for micro-gravity testing. By the time my internship was over for the summer, I was able to finish the LabView VI for the MasterFlex pump and also finish a VI for the Phantom camera. The software and the hardware were communicating through LabView without any problems. These VI’s were going to be implemented into a larger LabView program that would control other units on the rack, this VI was being implemented by my mentor, Rochelle May, who guided me through my project during the summer and does a great deal of the software issues on the project. Project Objectives My objectives for my project for this past summer were the following: I first learned the MasterFlex pump interfaces and basic configurations to be able to control the pump. I then learned how LabView and the MasterFlex pump were going to communicate with each other. I was then going to write a LabView program that would make it possible for the software to absolutely control the hardware. The LabView sub-VIs were implemented onto a laptop that would use LabView to control major components of the VPCAR system. The pump in its final stage of the project development would be pumping urine into the VPCAR system and testing would undergo on the DC-9. Methodology My Logic behind my project was taken from previous experiences on projects using LabView. After learning how the pump and the software would run in coalescence I was able to create a LabView program that would meet the testing and mission objectives necessary to further the project. I then performed test runs to troubleshoot any bugs and errors in the program. There was also an organized schematic made in order to make it easy to read and troubleshoot. Results My results looked similar to the diagrams in the Figures/Charts Section (Figure 1), which shows a developed control panel and the schematic code window for the project. There were versions after this to meet easier user maneuverability.
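For readers who do not use LabView, a rough Python analogue of the pump-control VI described above is sketched below using the pyserial package: open the serial port, send a command, and read the reply. The port name, baud rate, and the command strings are placeholders only; the actual MasterFlex command set comes from the pump's serial-interface documentation and is not reproduced here.

import serial  # pyserial

def send_command(port, command):
    """Write one command string to the pump and return its response line."""
    port.write((command + "\r").encode("ascii"))
    return port.readline().decode("ascii", errors="replace").strip()

if __name__ == "__main__":
    # "COM1" and the command strings below are placeholders, not MasterFlex syntax.
    with serial.Serial("COM1", baudrate=9600, timeout=1.0) as pump:
        print(send_command(pump, "START_PUMP"))
        print(send_command(pump, "SET_SPEED 100"))
        print(send_command(pump, "STOP_PUMP"))
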

Significance and Interpretation of Results
The significance of the results is that the two figures show how the software reads signals from the "COM1" serial port and how the program is able to manipulate the hardware through LabView commands.

Figures and Charts

Figure 1.

Acknowledgments and References
I would like to thank my mentor, Rochelle May, for helping me with my project and my data as well as everyone in the flight software engineering branch. I would also like to thank NASA, OAI, and the Ohio Space Grant Consortium for giving me this opportunity and the unique experience that I will be able to utilize in future endeavors.

Chemical and Mechanical Stability of Membranes Modified by Ion Beam Irradiation

Student Researcher: Frederick C. Roepcke

Advisor: Dr. Isabel Escobar

The University of Toledo Chemical Engineering Department

The effect of modifying membranes by irradiation with hydrogen ions on membrane performance is being studied here. The objective of the project is to "firm up" the membrane infrastructure, since under pressure, membrane pores bend and temporarily change shape to allow large, normally rejected particles to pass through. As the ions penetrate the membrane surface, they lose energy to the membrane polymer and transfer energy to the membrane structure. This energy increase causes the membrane infrastructure to break existing bonds, cross-link internal pores, and form volatile molecules that change the microstructure of the membrane. This increase in internal rigidity of the membrane allows for a more predictable membrane permselectivity.

Figure 1 shows a typical infrared spectrum for the virgin sulfonated polysulfone membrane obtained at room temperature. The infrared spectrum (Fig. 1) was consistent with the spectrum of sulfonated polysulfone reported in the literature, which confirms that the membrane was made of sulfonated polysulfone. Peaks at 1041 cm-1, 1103 cm-1, 1149 cm-1, 1238 cm-1, 1485 cm-1, 2950 cm-1, 3110 cm-1, and 3380 cm-1 correspond to the presence of SO3 (sulphonic), C-O (ether), R-(SO2)-R (sulfone), C-O (ether), C=C (aromatic), CH (aliphatic), CH (aromatic) and OH stretching bonds, respectively. Figure 1 also compares the infrared spectra of the virgin and irradiated membranes, which shows that the peak height at 1041 cm-1 decreased by 19% after irradiation. This decrease was due to the breakage of some of the sulphonic-benzene ring bonds during irradiation of the membrane, and cross-linking occurs at the free sites. Changes occur only on the surface of the polymer, since ion beam irradiation was only used to modify the surface. When sulphonic bonds are broken after ion beam irradiation, a positive radical on the benzene ring is formed and H2SO4 is released. The free radical on the benzene ring is hypothesized to bind to an unbroken free sulphonic site to increase cross-linking. Thus, due to irradiation, the surface morphology of the polymer was changed and the charge of the membrane was decreased. These two analyses, along with the others mentioned earlier, provide a more concrete picture of what characteristic changes occurred on the membrane surface, and can determine whether these characteristic changes are any more likely to result on a post-modified sulfonated polysulfone membrane than on a virgin sulfonated polysulfone membrane.

The primary objective of this research is to determine what effect, if any, ion beam irradiation has on the mechanical stability of sulfonated polysulfone membranes as opposed to non-irradiated sulfonated polysulfone membranes. The project objectives included producing ATR-FTIR figures comparing irradiated and non-irradiated membranes, as well as contact angle measurements for both sets of membranes. It is hoped that the changes in the FTIR figures will also show changes in the contact angle measurements. Other objectives include the production of AFM measurements to determine if the roughness of the membranes was changed, and if the mechanical strength of the membranes was more likely to be affected due to ion beam irradiation.
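A small sketch of the peak-height comparison reported above (the 19% drop at 1041 cm-1) is given below. It assumes each spectrum is exported as a two-column text file of wavenumber (cm-1) and absorbance; the file names are hypothetical placeholders.

import numpy as np

def absorbance_at(filename, wavenumber):
    """Absorbance at the data point closest to the requested wavenumber."""
    wn, ab = np.loadtxt(filename, unpack=True)
    return ab[np.argmin(np.abs(wn - wavenumber))]

virgin = absorbance_at("virgin_membrane.txt", 1041.0)          # placeholder file name
irradiated = absorbance_at("irradiated_membrane.txt", 1041.0)  # placeholder file name
change = 100.0 * (virgin - irradiated) / virgin
print(f"peak height change at 1041 cm-1: {change:.1f}% decrease")
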
In order to determine the stability of the induced modifications, defined as the prevention of any deterioration incurred over time, modified and unmodified membranes were stored in different solutions for prolonged periods of time and then characterized to evaluate changes. Modified membrane samples were stored in parallel with virgin membrane samples for prolonged periods of time (i.e., from 1 week up to 3 months) in dry storage (membrane samples only) as well as wet storage (i.e., solutions of pH = 3, 5, 7 and 10, of low ionic strength (DI) and high ionic strength, and of 0, 0.1 and 0.5 M chlorine) to determine if any structural and/or morphological changes occurred, as well as whether the modification has been absorbed by the polymer.


[Figure 1 chart: absorbance versus wavenumber (cm-1), approximately 580 to 1,980 cm-1, overlaying the virgin and irradiated membrane spectra, with the decrease in peak height marked.]

Figure 1. FTIR Analyses Comparing the Virgin and Irradiated Membrane.

The FTIR analysis of the irradiated membranes subjected to a high ionic solution is shown in Figure 2.

[Figure 2 chart, "FTIR Irradiated Membrane High Ionic": spectra over roughly 700 to 2,000 cm-1 for the Week 1, Week 3, and Week 4 samples and DI.]

Figure 2. FTIR Analyses of Irradiated Membrane in High Ionic Solution.

The contact angle measurements of the irradiated membranes subjected to a high ionic solution are shown in Figure 3.

[Figure 3 chart, "Surface Tension Irradiated Membranes (Week 3)": contact angle (degrees), 0 to 80, by storage medium: pH 3, pH 5, pH 7, pH 10, DI, high ionic, 0.1 M Cl2, and 0.5 M Cl2.]

Figure 3. Contact Angle Measurements for Irradiated Membrane.

It can be seen in the FTIR analysis that the irradiated membrane has a peak forming in the 1450 cm-1 region. This means that a bond has been forming, but it is not known what the exact bond is. The contact angle measurements show that exposure to the high ionic solution does in fact have an effect on the hydrophobic nature of the irradiated membrane. The membrane became much more hydrophilic when exposed to the high ionic solution. The next step in this research is to determine whether the virgin membranes would have experienced any of these changes and, if so, to what extent. Another area that should be further explored is the effect of varying the concentration of the high ionic solution.

Acknowledgements

• Dr. Isabel Escobar
• Dr. Kenneth De Witt
• Rama Chennamsetty
• Tilak Gullinkala
• OSGC

Reference
1. R. Chennamsetty, I. Escobar. Modification by Ion Beam Irradiation and Characterization (2005).

Aerial Odor Tracking in Three Dimensions

Student Researcher: Adam J. Rutkowski

Advisors: Roger D. Quinn and Mark A. Willis

Case Western Reserve University Department of Mechanical and Aerospace Engineering

Abstract Odor tracking strategies have gained importance with applications to seeking improvised explosive devices and hazardous materials. The approach taken here for developing an odor tracking strategy is to draw inspiration from the pheromone tracking behavior of the tobacco hornworm moth. A successful odor tracking algorithm depends on reliable determination of the wind direction. A method for determining the wind direction using a camera and airspeed sensors onboard an unmanned aerial vehicle was developed and tested in simulation. The absolute error between each component of the true and estimated wind velocity vector had a standard deviation of less than 0.065 m/s for a true wind velocity on the order of 1 m/s. A method of tracking an odor plume was then developed and tested in a separate simulation. The desired motion of the tracking vehicle was decomposed into a component tangential to the wind and a component normal to the wind. In this work, only the normal component of the velocity is controlled and the tracking vehicle remains at a constant distance downwind from the odor source. The turn rate of the normal component of the velocity vector is adjusted to keep the tracking vehicle near the odor plume. The odor tracking strategy directed the tracking vehicle to turn continuously around the odor source in a manner similar to the moths and was capable of adapting to different odor plume geometries. Project Objectives Odor tracking strategies have gained importance with applications to seeking improvised explosive devices and hazardous materials. An odor tracking strategy could also be implemented on an unmanned aerial vehicle with a smoke sensor onboard to patrol a forest for fires. Upon detection of smoke, the vehicle could then locate the fire. This will require implementing a three-dimensional odor tracking strategy. Most odor tracking research considers the two-dimensional odor tracking problem for tracking odors that remain near the surface. However, the need for three-dimensional strategies for tracking odors that travel through the atmosphere or underwater is clear. The approach taken here for developing an odor tracking strategy is to draw inspiration from the pheromone tracking behavior of the tobacco hornworm moth. A male moth locates a female moth (a potential mate) by tracking the pheromone released by the female. To study this behavior in a laboratory wind tunnel, a piece of filter paper containing pheromone is placed at the upwind end of the test section and a male moth is released at the downwind end roughly two meters from the source [1]. The behavior is recorded at 30 Hz with video cameras from overhead and from downwind. Figure 1 shows the downwind position of the moth as a function of time relative to the pheromone plume for a particular case. The horizontal and vertical positions relative to the pheromone source as a function of time are shown in Figure 2. In general, the moth travels upwind while attempting to remain in contact with the pheromone plume. The moth counter-turns across the wind with a fairly regular timing between the turns called the inter-turn duration (ITD). The average ITD for the tobacco hornworm moth is about 550 ms. Typically, only the horizontal behavior is analyzed and the regularity of the horizontal ITD has suggested that the counter-turning behavior is controlled by an inter-turn timer. 
However, the variability in vertical position is comparable to the horizontal variability and a counter-turning behavior is also observed in this direction. The path of the moth as viewed from downwind is shown in Figure 3. The figure shows how the moth turns continuously in such a way that keeps the moth close to the pheromone plume. The first objective of this research project is to design and test a method for determining the direction of the wind from an aerial vehicle. This is a nontrivial task since the vehicle is not attached to the ground. Sensors that respond to the motion of the air when mounted on an aerial vehicle report the airspeed of the

vehicle, not the wind speed. The direction of the air moving past the vehicle can be quite different from the direction of the wind. Visual information can be fused with airspeed information to determine the wind direction, the motion of the vehicle relative to the ground (egomotion), and the height of the vehicle. This algorithm, henceforth called the aero-optical egomotion estimation algorithm, is an important piece of the odor tracking puzzle that can be implemented with any odor tracking strategy. The second objective is to design and test a new three-dimensional odor tracking strategy. Since the pheromone plume created in the wind tunnel does not change significantly in width as downwind distance from the source increases, it is possible that the regularity of the ITD observed in moths is a function of the plume size and the mass of the moth rather than being directly controlled by an inter-turn timer. The strategy presented here is designed to keep the tracking vehicle near the odor plume without the use of inter-turn timers.

Methodology Used
Figure 4 shows the scenario for which the aero-optical egomotion estimation algorithm is designed. A vehicle flies over level ground at a variable altitude h(ti), where ti represents the time corresponding to the ith timestep. The vehicle moves with a groundspeed, or velocity relative to the ground, of vg(ti). The vehicle is constrained to rotate only about the y-axis with an angular velocity of ωy. A variable wind is present with vector velocity w(ti) relative to the ground. Sensors onboard the vehicle measure a true airspeed, or velocity of the vehicle relative to the surrounding air, of va(ti). A camera is mounted on the underside of the vehicle and a Cartesian coordinate system is fixed to the camera. Since the vehicle is assumed to not rotate about the x or z axes, the camera remains pointed directly downward at all times. The aero-optical egomotion estimation algorithm is designed to work directly with an optical flow algorithm that calculates the apparent image motion in the x, y, and z directions and rotational image motion about the y-axis as seen by the camera [2][3]. Optical flow is represented by the vector vo(ti). The algorithm uses optical flow and airspeed information to determine an estimate of the height, which is denoted as ĥ(ti). Once the height is estimated, an estimate of the wind speed, ŵ(ti), is obtained using (1).

ŵ(ti) = ĥ(ti)·vo(ti) − va(ti)          (1)

Notice that if va(ti) and vo(ti) are known, equation (1) cannot be solved for unique values of ŵ(ti) and ĥ(ti); in other words, the equation is underdetermined. By constraining the wind velocity to be smooth over a short time period, ŵ(ti) and ĥ(ti) can be found using airspeed and optical flow information from the current and previous time steps. Details are provided in [4]. Digitized position and body axis orientation data of a moth tracking a pheromone plume in a wind tunnel were used to test the aero-optical egomotion estimation algorithm in simulation. Velocity was calculated by dividing the differential displacement in successive samples by the sampling time. Optical flow information was then simulated by dividing the velocity by the height of the moth above the floor at that instant and adding 10% white noise. Wind speed data were recorded in an open field using an orthogonal set of acoustic anemometers. The wind speed data were filtered to remove noise and then resampled at 30 Hz to match the sampling rate of the position data. It is assumed that no wind existed in the y-direction. Airspeed data were simulated by subtracting the wind speed vector from the ground speed vector and adding 10% white noise. The airspeed data were simulated in this manner to challenge the algorithm in conditions of variable wind instead of the constant flow experienced in the wind tunnel. The approach taken in developing the odor tracking algorithm was to decompose the desired motion of the tracking vehicle into a component tangential to the wind and a component normal to the wind as shown in Figure 5. By taking this approach, it can be guaranteed that upwind progress is made toward the odor source, although the motion at any given time may not be directly toward the odor source. The wind, with vector velocity w, carries odor away from the source. The odor is sampled with two odor detectors, one on the left and one on the right. There is a constant separation distance ds between the left and right odor detectors. The plane normal to the wind velocity vector at the current location of the vehicle will be

referred to as the normal plane. A coordinate system is attached to the normal plane such that the source is located at the origin somewhere upwind of the vehicle. Let vn and vt represent the normal and tangential components respectively of the vehicle velocity vector. In this work, only the normal component of the velocity is controlled and the tracking vehicle remains at a constant distance downwind from the odor source. Further details can be found in [5]. The block diagram in Figure 7 summarizes the odor tracking algorithm. First, the odor signal is binarized with a simple threshold. Next, the source location is estimated as an average of previously visited locations where odor was detected. After the odor source location is updated, the desired turn rate ψ̇(ti) of the normal component of the velocity vector is determined. The sign of ψ̇(ti) is chosen so that the vehicle turns toward the estimated source location. To keep the tracking vehicle near the odor plume, the algorithm directs the vehicle to turn sharply when odor is not detected, turn moderately when odor is detected by only one odor detector, and turn softly when odor is detected by both odor detectors. Finally, the normal component of the velocity vector is calculated from the desired turn rate. The magnitude of the normal component of the velocity is held constant at 500 mm/s. A very simple model of an odor plume was developed for testing the algorithm in simulation. The plume was modeled such that the probability of odor detection at a location in the normal plane follows a bivariate normal distribution as in Figure 6. The black ellipses show contours of equal probability. As the vehicle moves through the normal plane, a uniformly distributed random value between 0 and 1 is selected at each timestep. If this random value is less than the probability of odor detection at the present location, then odor is detected at that location. Otherwise, odor is not detected. In the simulations, we set ds = 5 centimeters, which is the approximate antennal separation of the tobacco hornworm moth.

Results Obtained
Comparisons between the actual and estimated components of the wind speed vector are shown in Figure 8. Comparisons for each component of the ground speed vector are shown in Figure 9. A comparison of the actual height of the moth to the estimated height of the moth as calculated by the aero-optical algorithm is shown in Figure 10. Absolute error statistics for these data are presented in Table I. An absolute error analysis was used instead of a relative error analysis because the components of wind speed and groundspeed were often near zero. The odor tracking algorithm was tested for three different plume types – a point source plume, a wide plume, and a tall plume. The plume geometry was changed by adjusting the parameters σy and σz according to Table II. The simulation was performed 15 times for each of the three plume types. The initial position of the tracking vehicle was 0.05 meters above the source location. In each case, the simulation was allowed to run for 10 seconds and the sampling and control frequency was set at 30 Hz. The mean ITD for each trial was then determined and recorded. The mean and standard deviation of the mean ITDs for trials of the same plume type were then calculated. The results are presented in Table III. Example simulated tracks are shown in Figure 11 for the point source plume and Figure 12 for the wide plume.
These figures show how the tracking vehicle remains near the odor plume while turning continuously. The track in the case of the wide plume was wider than that for the point source plume. Also, the estimated source location approached the true source location in both cases. The corresponding vertical and horizontal positions as a function of time are shown in Figure 13 and Figure 14. The vehicle exhibited a counter-turning behavior in both the horizontal and vertical directions even though the timing of the turns was not explicitly controlled.

Significance and Interpretation of Results
The odor tracking algorithm developed here is a first step toward a fully three-dimensional odor tracking strategy. The results shown in Table III indicate that the algorithm adapts to different odor plume geometries by generating a track that has a higher ITD in the direction of the largest dimension of the plume cross section. This prevents the tracking vehicle from getting stuck on one side of the plume,

where it would be more likely to lose contact with the plume. A rule that controls the upwind progress of the tracking vehicle as a function of the odor concentration will later be added to this odor tracking strategy to make it fully three-dimensional. The aero-optical egomotion estimation algorithm was successful at determining the wind direction using a camera and airspeed sensors onboard a simulated unmanned aerial vehicle. The absolute error between each component of the true and estimated wind velocity vector had a standard deviation of less than 0.065 m/s for a true wind velocity on the order of 1 m/s. In the future, the aero-optical egomotion estimation algorithm will be integrated into the odor tracking strategy to create a more general odor tracking strategy that will be implemented on a robotic platform.

Figures/Charts

[Figure 1 chart: distance downwind from source, x (m), versus time, t (s).]

Figure 1. Downwind distance of a male tobacco hornworm moth tracking a pheromone plume as a function of time (downwind is in the negative direction and the pheromone source is at x=0).

[Figure 2 chart: horizontal (z) and vertical (y) position (m) versus time, t (s). Figure 3 chart: vertical position, y (m), versus horizontal position, z (m).]

Figure 2. Horizontal and vertical position of a male tobacco hornworm moth tracking a pheromone plume as a function of time (pheromone source is at y=z=0).

Figure 3. Downwind view of a male tobacco hornworm moth tracking a pheromone plume (pheromone source is at the origin).

[Figures 4 and 5 schematics, labeled with the camera axes (x, y, z), height h, groundspeed vg, airspeed va, wind w, detector separation ds, velocity components v, vt, and vn, the angle ψ, the odor source, and the odor detectors.]

Figure 4. Schematic of the relationship between ground speed (vg), airspeed (va), and wind speed (w).

Figure 5. Parameterization of the velocity vector used for the odor tracking algorithm.

Figure 6. Probability of detecting odor as a function of position in the normal plane.

[Figure 7 block diagram, reconstructed in outline form:
1. Sample the odor: cright(ti) = 1 if the right antenna odor signal exceeds the threshold and 0 otherwise; cleft(ti) is defined the same way for the left antenna.
2. Estimate the source location: csum(ti) = csum(ti-1) + cright(ti) + cleft(ti); ẑs(ti) = [csum(ti-1)·ẑs(ti-1) + cleft(ti)·zleft(ti) + cright(ti)·zright(ti)] / csum(ti); ŷs(ti) = [csum(ti-1)·ŷs(ti-1) + cleft(ti)·yleft(ti) + cright(ti)·yright(ti)] / csum(ti).
3. Calculate the desired turn rate: |ψ̇(ti)| = 5 rad/s if cright(ti) + cleft(ti) = 0 (turn sharply if neither antenna detects odor), 2 rad/s if cright(ti) + cleft(ti) = 1 (turn moderately if one antenna detects odor), and 1 rad/s if cright(ti) + cleft(ti) = 2 (turn softly if both antennae detect odor); the sign of ψ̇(ti) is set by the sign of the cross product of vn(ti-1) with [0, ŷs(ti) − y(ti), ẑs(ti) − z(ti)], so that the vehicle turns toward the estimated source location.
4. Calculate the desired normal velocity: ψ(ti) = ψ(ti-1) + ψ̇(ti)(ti − ti-1); vn(ti) = 500 mm/s · [0, sin ψ(ti), cos ψ(ti)].]

Figure 7. The odor tracking algorithm follows these steps at each timestep. The position of the left odor detector in the normal plane is (zleft, yleft) and the position of the right odor detector is (zright, yright). The parameters ẑs(ti) and ŷs(ti) represent an estimate of the horizontal and vertical components respectively of the odor source location in the normal plane at time ti.
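To make the steps in Figure 7 concrete, the Python sketch below simulates the normal-plane controller together with the bivariate-normal plume model described in the text. The plume parameters, turn rates, speed, and 30 Hz rate are taken from the paper; the detector orientation (perpendicular to the current heading), the initial heading, and the initial source estimate are assumptions made for this illustration, so the resulting track is only indicative of the behavior in Figures 11 through 14.

import numpy as np

rng = np.random.default_rng(0)
P0, sigma_z, sigma_y = 0.64, 0.1, 0.1    # point-source plume parameters (Table II)
ds, speed, dt = 0.05, 0.5, 1.0 / 30.0    # detector spacing (m), |vn| (m/s), timestep (s)

def detect(z, y):
    """Bernoulli odor detection with a bivariate-normal-shaped probability."""
    p = P0 * np.exp(-0.5 * ((z / sigma_z) ** 2 + (y / sigma_y) ** 2))
    return rng.random() < p

pos = np.array([0.0, 0.05])        # (z, y) in the normal plane; start 5 cm above the source
psi = 0.0                          # heading angle of vn (assumed initial value)
src_est, c_sum = pos.copy(), 0.0   # running average of locations where odor was detected

for _ in range(300):               # 10 seconds of simulated tracking at 30 Hz
    heading = np.array([np.cos(psi), np.sin(psi)])
    lateral = np.array([-np.sin(psi), np.cos(psi)])   # assumed detector axis
    left, right = pos + 0.5 * ds * lateral, pos - 0.5 * ds * lateral
    c_left, c_right = detect(*left), detect(*right)

    # Step 2: update the source estimate with any detection locations.
    for hit, where in ((c_left, left), (c_right, right)):
        if hit:
            src_est = (c_sum * src_est + where) / (c_sum + 1.0)
            c_sum += 1.0

    # Step 3: turn sharply / moderately / softly for 0 / 1 / 2 detections,
    # with the sign chosen to turn toward the estimated source location.
    rate = {0: 5.0, 1: 2.0, 2: 1.0}[int(c_left) + int(c_right)]
    to_src = src_est - pos
    sign = np.sign(heading[0] * to_src[1] - heading[1] * to_src[0]) or 1.0

    # Step 4: integrate the turn rate and advance at constant speed.
    psi += sign * rate * dt
    pos = pos + speed * dt * np.array([np.cos(psi), np.sin(psi)])

print("final position (z, y):", pos, " estimated source:", src_est)
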

[Figures 8 and 9 charts: the x-, y-, and z-components of the wind speed vector (wx, wy, wz, in m/s) and of the ground speed vector (vgx, vgy, vgz, in m/s) plotted against time, t (s).]

Figure 8. Comparisons of each component of the true wind speed vector to the estimates obtained using the aero-optical egomotion estimation algorithm.

Figure 9. Comparisons of each component of the true ground speed vector to the estimates obtained using the aero-optical egomotion estimation algorithm.

TABLE I. ERROR STATISTICS OF THE AERO-OPTICAL EGOMOTION ESTIMATION ALGORITHM.

              mean           standard deviation
h − ĥ         0.0025 m       0.0454 m
wx − ŵx       0.0087 m/s     0.0618 m/s
wy − ŵy       0.0030 m/s     0.0572 m/s
wz − ŵz       0.0034 m/s     0.0535 m/s
vgx − v̂gx     0.0030 m/s     0.0425 m/s
vgy − v̂gy     0.00007 m/s    0.0428 m/s
vgz − v̂gz     0.00004 m/s    0.0455 m/s

Figure 10. Comparison of the actual height of the moth to the estimated height as calculated by our aero-optical algorithm.

TABLE II. PLUME PARAMETERS USED FOR ODOR TRACKING SIMULATIONS.

         Point Source   Wide Source   Tall Source
P0       0.64           0.64          0.64
σz       0.1            0.2           0.1
σy       0.1            0.1           0.2

TABLE III. MEANS AND STANDARD DEVIATIONS OF THE MEAN INTER-TURN DURATION (ITD) OF THE 15 SIMULATED TRACKS FOR EACH ODOR PLUME TYPE.

plume type   Horizontal ITD (milliseconds)   Vertical ITD (milliseconds)
point        810±20                          810±30
wide         930±80                          890±30
tall         900±20                          940±60

[Figures 11 and 12 charts: vertical position, y (m), versus horizontal position, z (m).]

Figure 11. Simulation of tracking a point source plume in a plane normal to the wind direction.

Figure 12. Simulation of tracking a wide plume in a plane normal to the wind direction.

[Figures 13 and 14 charts: horizontal (z) and vertical (y) position (m) versus time, t (s).]

Figure 13. Vertical and horizontal components of the position of the vehicle tracking a point source plume.

Figure 14. Vertical and horizontal components of the position of the vehicle tracking a wide plume.

Acknowledgments
I thank the Ohio Space Grant Consortium for their financial support. I thank my advisors, Professor Roger Quinn and Professor Mark Willis for their guidance. I also thank everyone in the Biorobotics Laboratory and the Willis Laboratory for all the insights they have provided.

References
[1]. A. E. Arbas, M. A. Willis, and R. Kanzaki, “Organization of goal-oriented locomotion:

Pheromone-modulated flight behavior of moths,” Biological neural networks in invertebrate neuroethology and robotics, San Diego: Academic Press, 1993.

[2]. M. V. Srinivasan, “An image-interpolation technique for the computation of optical flow and egomotion”, Biological Cybernetics, Vol. 71, 1994.

[3]. M. G. Nagle and M. V. Srinivasan, “Structure from motion: determining the range and orientation of surfaces by image interpolation”, Journal of the Optical Society of America A, Vol. 13, No. 1, January 1996

[4]. Rutkowski, A., Quinn, R., Willis, M., “Biologically Inspired Self-motion Estimation using the Fusion of Airspeed and Optical Flow”, 2006 American Controls Conference, Minneapolis, MN, USA, June 2006.

[5]. Rutkowski, A., Willis, M., Quinn, R., “Simulated Odor Tracking in a Plane Normal to the Wind Direction”, IEEE 2006 International Conference on Robotics and Automation (ICRA), Orlando, FL, USA, May 2006.

Potato Projectile Motion

Student Researcher: Joseph J. Scavuzzo

Advisor: Dr. Paul C. Lam

The University of Akron Secondary Education Chemistry/Physics

Abstract
The OSGC Student Research Project Symposium has given me the opportunity to create a lesson plan that will spark interest in the minds of students. I hope to excite kids about science, where it has gotten us, and where I will take us. The lesson I chose to do will cover the basics of projectile motion and kinetic energy in ideal situations by analyzing a potato shot out of a potato launcher. This project will give students real hands-on experience with analyzing projectile motion.

Project Objectives
The objectives for my project are not to have students memorize formulas and constants, but to have them grasp some important physical concepts. First, students will learn about vector forces. They will learn how multiple forces acting on a projectile come together and result in what they see as projectile motion. The second thing students will learn about is kinetic energy. They will see how and to what degree the kinetic energy of an object can be changed under certain conditions. The last objective is for students to gain a perspective on escape velocity and how much energy it takes to get something out of the Earth's or the Moon's atmosphere.

Methodology Used
After instruction, the students will, as a class, take measurements of the potato's mass, horizontal distance traveled, and the time the potato is in the air. They will then be broken into small groups of three or four and work on a worksheet that will aid them in finding several things. The first thing they will find is the velocity of the potato the instant it is fired from the gun. Next, they will find the kinetic energy produced by the potato launcher. Last, they will find the kinetic energy needed to shoot a potato out of the Earth's or the Moon's atmosphere.

Figures/Chart

[Figure: launch geometry showing the launch angle θ = 45°, the initial height y0, and the height y.]
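To make the worksheet calculations concrete, the sketch below works through the three quantities described in the Methodology, assuming example measurements of mass, horizontal range, and flight time, an ideal launch with no air resistance, and textbook escape speeds for the Earth and the Moon. The numbers and function name are illustrative and are not part of the actual lesson materials.

```python
import math

G = 9.81                # gravitational acceleration near Earth's surface (m/s^2)
V_ESC_EARTH = 11_200.0  # approximate escape speed from the Earth (m/s)
V_ESC_MOON = 2_380.0    # approximate escape speed from the Moon (m/s)

def launch_analysis(mass_kg, range_m, flight_time_s):
    """Estimate launch speed and kinetic energy from the class measurements,
    assuming the potato lands at roughly the height it was launched from and
    neglecting air resistance."""
    vx = range_m / flight_time_s          # horizontal velocity component (constant)
    vy = G * flight_time_s / 2.0          # vertical velocity component at launch
    v0 = math.hypot(vx, vy)               # launch speed from the vector components
    ke_launch = 0.5 * mass_kg * v0 ** 2   # kinetic energy leaving the launcher
    ke_escape_earth = 0.5 * mass_kg * V_ESC_EARTH ** 2
    ke_escape_moon = 0.5 * mass_kg * V_ESC_MOON ** 2
    return v0, ke_launch, ke_escape_earth, ke_escape_moon

# Example: a 0.25 kg potato that lands 60 m away after 3.0 s in the air.
v0, ke, ke_earth, ke_moon = launch_analysis(0.25, 60.0, 3.0)
print(f"launch speed ~ {v0:.1f} m/s, kinetic energy ~ {ke:.0f} J")
print(f"energy to escape the Earth ~ {ke_earth:.2e} J, the Moon ~ {ke_moon:.2e} J")
```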


Name: Joey Scavuzzo Subject Area: Physics Grade Level: 12 Lesson Topic: Potato Launcher Time Allocation: 2-3 class periods Instructional Goals:

Using a problem solving approach, knowledge of projectile motion, and kinetic energy, students will be able to analyze projectile motion.

Learning Objectives: 1) Students will be able to calculate the vertical and horizontal components of projectile motion. 2) Students will gain a basic understanding of how to add vectors. 3) Students will be able to calculate the kinetic energy of a projectile. 4) Students will be able to apply kinetic energy to escape velocity.

Standards: Science: D. Apply principles of forces and motion to mathematically analyze, describe, and predict the net effects on objects or systems.

Grouping of Students: Whole class – Introduction to the potato launcher and measurements. Small groups (3 or 4 students) – Data analysis.

Materials: 1) Each student: Worksheet 2) Potato launcher 3) Tape measure 4) Stopwatch

Prior Knowledge Needed: 1) Students should know how to plug the data taken into the provided formulas. 2) Students should know how to do basic algebra. 3) Students should know how to take measurements.

Procedures: Instructional Strategies: 1. The teacher will lead a discussion about what the kids know about projectile motion. Questions: i. Have you ever seen a cannon or piece of artillery fire? ii. How fast do you think the projectile is going? iii. How far do you think the projectile can go? iv. Do you think it has enough kinetic energy to leave Earth’s atmosphere? v. What about the Moon? 2. The teacher will fire the launcher. Learner Activities: 1. Students will measure the mass of the potatoes. 2. Students will measure the distance the potatoes travel. 3. Students will measure the time the potato is in the air. 4. Students will fill out the Potato Projectile Packet.

Addressing Diversity: Learning Modalities: Auditory Learners- Students will be part of a discussion with the teacher at the start of the project. They will also take part in discussion with classmates in their small groups. Visual Learners- Students will actually be watching the launcher launch projectiles; this will provide them with a visual example of projectile motion. Kinesthetic/Tactile Learners- Students will be taking a hands-on approach to taking the measurements of the projectile. Special Accommodations: Students with physical restrictions will be able to take part in the group activity and the measurements of the projectile.

Assessment/s: Before instruction: The discussion with the teacher before the project will help the teacher to see how much the students know about projectile motion. During instruction: Informal assessment; the teacher will monitor student progress during the project. After instruction: Students will be assessed based on their Potato Projectile Packets.


Modeling the Reliability of Existing Software Using Static Analysis

Student Researcher: Walter W. Schilling, Jr.

Advisor: Dr. Mansoor Alam

The University of Toledo Department of Electrical Engineering and Computer Science

Introduction
In the past, software failure often was regarded as an unavoidable nuisance. Increased reliance upon software has begun to change this attitude, and software reliability is beginning to be significantly considered in each new product development. A 2002 study by the National Institute of Standards and Technology found that software defects cost the American economy approximately $59.5 billion annually [1]. The anecdotal evidence supporting this statement is plentiful. A single automotive software error led to a recall of 2.2 million vehicles and expenses in excess of $20 million [2]. 79% of medical device recalls can be attributed to software defects [3]. It is reported that software-driven outages exceed hardware outages by a factor of ten [4]. The quantity of embedded software is doubling every 18 months [5].

Traditional software reliability models require significant data collection during development and testing, including the operational time between failures, the severity of the failures, and other metrics. This data is then applied to the project to determine if adequate software reliability has been achieved. While these methods have been applied successfully to many projects, there are often occasions where the failure data has not been collected in an adequate fashion to obtain relevant results. This is often the case when using software which has been developed previously or purchasing COTS components. In the reuse scenario, the development data may have been lost or never collected. With COTS software, the requisite development data may be proprietary and unavailable to subsequent software engineers. This poses a dilemma for a software engineer wishing to reuse a piece of software or purchase a piece of software from a vendor. Internal standards for release vary greatly, and from the outside, it is impossible to know the reliability of a given package. One company might release early on the reliability curve, resulting in more failures occurring in the field, whereas another company might release later in the curve, resulting in fewer field defects.

As software does not suffer from age-related failure, all faults which lead to failure are present when the software is released. In a purely theoretical sense, if all faults can be detected in the released software, and these faults are then assigned a probability of manifestation during software operation, an estimation of the software reliability can be obtained. The difficulty with this theoretical concept is reliably detecting the software faults and assigning the appropriate failure probabilities. It is a known fact that it is impossible to prove that a computer program is correct, as this is an instance of the unsolvable Halting problem [6]. However, while neither perfect nor guaranteed, static analysis has been shown to be practical and extremely effective at detecting faults. It is believed that it is possible to develop a reliability model based upon static analysis, limited testing, and Bayesian Belief Networks.

Static Analysis Tools and Techniques Static analysis of source code is a technique commonly used during implementation and review to detect software implementation errors. Similar in behavior to a spell checker or grammar checker in a word processor, static analysis tools detect faults within source code modules. Static analysis has been shown to reduce software defects by a factor of six [7], as well as detect 60% of post-release failures [8]. Static analysis can detect errors such as buffer overflows and security vulnerabilities [9] [10], memory leaks [11], timing anomalies [12], as well as other common programming mistakes, 40% of which will eventually manifest themselves as a field failure [8]. Table I contains information regarding recent failures which could have been prevented if the appropriate static analysis tools had been applied.
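To illustrate the kind of latent fault a static analysis tool can flag before the code ever runs, the short sketch below shows an unguarded division that fails for empty input, the same class of defect behind the USS Yorktown entry in Table I. The example is written in Python purely for brevity and is hypothetical; the tools discussed in this report target languages such as C, C++, and Java.

```python
def average_response_time(samples):
    """Return the mean of the collected samples.

    Latent fault: if `samples` is empty, the division below raises
    ZeroDivisionError.  A static analysis tool can flag the unguarded
    division without ever executing the code.
    """
    return sum(samples) / len(samples)

def average_response_time_guarded(samples):
    """Guarded version: the same fault can no longer manifest as a failure."""
    if not samples:
        return 0.0
    return sum(samples) / len(samples)
```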


TABLE I. Software Failure Root Cause Descriptions

Failed System | Date of Event and Causes | Statically Detectable
Air Traffic Communications | Sept. 14, 2004. Failure to reboot communications system; reboot required by defective usage of GetTickCount() API call in Win32 subsystem. | Possibly
Mars Spirit Rover | Jan. 21, 2004. File system expanded beyond available RAM. | Yes
2003 US Blackout | Aug. 14, 2003. Race condition within event handling system, allowing two threads to simultaneously write to a data structure. | Yes
Comair Fleet Grounding | Dec. 25, 2004. Hard-coded limit of 32,000 crew changes per month exceeded. | Yes
Patriot Missile Failure | Feb. 25, 1991. Error induced in algorithmic computation due to rounding of decimal number. | Yes
STS-2 | October 1981. Uninitialized code jump. | Yes
Ariane 5 | June 4, 1996. Unhandled exception raised by variable overflow in typecast, resulting in computer shutdown. | Yes
Milstar 3 | April 30, 1999. Improper filter coefficient entered into configuration tables. | Possibly
Mars Pathfinder | Sept. 27, 1997. Priority inversion between tasks resulted in reset of CPU. | Possibly
Mars Climate Orbiter | Dec. 11, 1998. Improper conversion between metric and English units within navigational model. | Possibly
USS Yorktown | Sept. 21, 1997. Improper user input resulted in division by zero. This caused a cascading error which shut down the ship's propulsion system. | Yes
Clementine | May 7, 1994. Software error resulted in stuck-open thruster. | Possibly
GeoSat | Feb. 10, 1998. Improper sign in telemetry table resulted in momentum and torque being applied in the wrong direction. | Possibly
Near Earth Asteroid Rendezvous Spacecraft | Feb. 17, 1996. Transient lateral acceleration of the craft exceeded firmware threshold, resulting in shutdown of thruster. Craft entered safe mode, but race conditions in software occurred, leading to unnecessary thruster firing and a loss of fuel. | Possibly
Telephone Signal Transfer Point Outages | June 10 – July 2, 1991. Incorrect binary code in SS7 system. | Possibly
AT&T Long Distance Outages | Jan. 15, 1990. Missing break statement in case statement. | Yes

Static analysis of source code does not represent new technology. Static analysis is routinely used in mission-critical source code development, such as in the aircraft [13] and rail transit [14] areas. Robert Glass reports that static analysis can remove upwards of 91% of errors within source code [15] [16]. Richardson [17] and Giesen [18] provide an overview of the concept of static analysis, including the philosophy and practical issues related to static analysis. Nagappan et al. [19] discuss the application of static analysis to a large-scale industrial project. Recent static analysis research has shown a statistically significant relationship between the faults detected during automated inspection and the actual number of field failures occurring in a specific product [19]. Code coverage obtained during testing and the failure rate observed have also been correlated [20]. Static analysis tools have two characteristics of note: soundness and completeness. A static analysis tool is defined to be sound if it detects all faults present within a given source code module. A static analysis tool is deemed to be complete if it never gives a spurious warning. A static analysis tool is said to generate a false positive if a spurious warning is detected within source code. A static analysis tool is said to generate a false negative if a fault is missed during analysis. In practice, nearly all static analysis tools are unsound and not complete, as most tools generate false positives and false negatives [11]. For all of the advantages of static analysis tools, there have been very few independent comparison studies between tools. Rutar et al. [21] compare the results of using FindBugs, JLint, and PMD tools on Java source code.


Lu et al. [22] propose a set of benchmarks for benchmarking bug detection tools, but the paper does not specifically include static analysis tools.

TABLE II. Relationship between faults and failures in different models

Source | Model
ANSI / IEEE 729-1983 | error → fault → failure
Fenton | error → fault → failure
Shooman | fault → error → failure
IEC 1508 | fault → error → failure
Hatton | error → fault or defect or bug → failure

Proposed Software Reliability Model The development of software is a labor intensive process, and as such, programmers make mistakes, resulting in faults being injected during development into each and every software product. The rate varies for each engineer, the implementation language chosen, and the Software Development process chosen. These injected defects are removed through the software development process, principally through review and testing. It is often the case that the terms fault and failure are used interchangeably. This is incorrect, as each term has a distinct and specific meaning. Unfortunately, sources are not in agreement as to their relationship. Different models for this relationship are shown in Table II. For the purposes of this article, a fault represents a software defect injected by a software engineer during software development. It represents a static property of the source code. Faults are initiated through a software developer making an error, either through omission or other developer action. This definition set most closely follows that of Hatton and Fenton.

A failure is a dynamic property and represents an unexpected departure of the software package from expected operational characteristics. A failure can be attributable to one or more faults within the software package, and not all faults will result in a failure over a given period of time, as is shown graphically in Figure 1. Any fault can potentially cause the failure of a software package. However, the probability for a fault manifesting itself as a failure is not uniform. Adams [23] indicates that, on average, one third of all software faults manifest themselves as a failure once every 5000 executable years, and only two percent of all faults lead to a MTTF of less than 50 years. Downtime is not evenly distributed either, as 90% of the downtime comes from less than 10% of the faults. From this behavior, it follows that finding and removing a large number of defects does not necessarily yield a high reliability product. Instead, it is important to focus on the faults that have a short MTTF associated with them.

What Causes a Fault to Become a Failure In order to use static analysis for reliability prediction, it is important to understand what causes these faults to become failures. In reliability growth modeling, one of the most important parameters is known

[Figure 1: the set of all faults, with the subset of faults which lead to failure.]

Figure 1. The Relationship between Faults and Failures.


as the fault exposure ratio (FER). This parameter represents the average detectability of a fault within software, or in other words, the relationship between faults and failures. There are many reasons why a fault lies dormant and does not manifest itself as a failure. The first, and most obvious, deals with code coverage. If a fault does not execute, it cannot fail. While this is intuitively obvious, determining if a fault can be executed can be quite complicated and require significant analysis. The first and most important starting point for determining if a fault is to become a failure is related to source code coverage. If a fault is never encountered during execution, it cannot become a failure. In many software systems, especially embedded systems, the percentage of code which routinely executes is actually quite small, and the majority of the execution time is spent covering the same lines over and over again. Embedded systems are also often designed with a few repetitive tasks that execute periodically at similar rates. Thus, with limited testing covering the normal use cases for the system, information about the "normal" execution path through the module can be obtained. There are many different metrics and measurements associated with code coverage. Kaner [24] lists 101 different coverage metrics that are available. Statement Coverage measures whether each executable statement is encountered. Block Coverage is an extension of statement coverage except that the unit of code is a sequence of non-branching statements. Decision Coverage reports whether Boolean expressions tested in control structures (such as the if-statement, switch statement, exception handlers, interrupt handlers, and while-statement) evaluated to both true and false. Condition Coverage reports whether every possible combination of Boolean sub-expressions occurred. Condition/Decision Coverage is a hybrid measure composed of the union of condition coverage and decision coverage. Path Coverage reports whether each of the possible paths in each function has been followed. Data Flow Coverage, a variation of path coverage, considers the sub-paths from variable assignments to subsequent references of the variables. Function Coverage reports whether each function or procedure was executed and is useful during preliminary testing to assure at least some coverage in all areas of the software. Call Coverage reports whether each function call has been made. Loop Coverage measures whether each loop body is executed zero times, exactly once, and more than once (consecutively). Race Coverage reports whether multiple threads execute the same code at the same time and is used to detect failure to synchronize access to resources. There has been significant study of the relationship between code coverage and the resulting reliability of the source code. Garg [25] and Del Frate [26] indicate that there is a strong correlation between code coverage obtained during testing and software reliability, especially in larger programs. The exact extent of this relationship, however, is unknown. Marick [27] cites some of the misuses of code coverage metrics. A certain level of code coverage is often mandated by the software development process when evaluating the effectiveness of the testing phase. This required level varies.
Extreme Programming advocates endorse 100% method coverage in order to ensure that all methods are invoked at least once, though there are also exceptions given for small functions which are smaller than the test cases [28]. Piwowarski, Ohba, and Caruso [29] indicate that 50% statement coverage is insufficient to exercise the module, 70% statement coverage is necessary to ensure sufficient test case coverage, and beyond 70%-80% is not cost effective.
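As a concrete illustration of the difference between two of these measures, consider the hypothetical function below: a single test gives 100% statement coverage but only 50% decision coverage, because the if-decision is never evaluated to false. The function and values are illustrative only and are not drawn from the programs studied in this work.

```python
def apply_discount(price, is_member):
    """Apply a flat $10 member discount."""
    if is_member:
        price = price - 10.0   # executed only when the decision is true
    return price

# A single test with is_member=True executes every statement in the function,
# giving 100% statement coverage, yet the `if` decision is only ever evaluated
# to true, so decision coverage is just 50%.
assert apply_discount(100.0, True) == 90.0

# Adding the false case exercises the untaken outcome and completes decision
# coverage for this function.
assert apply_discount(100.0, False) == 100.0
```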

The Static Analysis Premise The fundamental premise of this software reliability model is that the resulting software reliability can be related to the number of statically detectable faults present within the source code, the number of paths which lead to the execution of the statically detectable faults, and the rate of execution of each path within the software package. To model reliability, the source code will first be divided into statement blocks. A statement block represents a contiguous set of source code instructions which are uninterrupted by a conditional statement. By using this organization, the source code is translated into a set of blocks connected by decisions. Detectable faults can then be partitioned into the appropriate block, simplifying the model.


Once the source code has been decomposed into blocks, the output from the appropriate static analysis tool is linked into the decomposed source code. By doing this, a reliability for each block can be assigned based upon the statically detectable faults. For a block which has a single statically detectable fault, the reliability of that block can be expressed as

R_block = 1 - p_f          (Equation 1)

where

p_f = p_fp · p_if          (Equation 2)

with p_fp representing the probability of a false positive static analysis detection occurring and p_if representing the probability of an immediate failure for the fault occurring. If there is more than one statically detectable fault within a given block, then

R_block = 1 - Σ_{i=1}^{n} p_{f,i}          (Equation 3)

expresses the reliability for the block. The two parameters which affect a fault becoming a failure have been carefully chosen based upon the nature of static analysis tools. For each statically detectable fault, there is a set of variables which affect the probability of the fault becoming a failure. Every static analysis tool in existence generates false positive fault warnings; this percentage varies significantly depending upon the program and the tool being used. Thus, the first variable affecting whether a fault becomes a failure is the probability that the detected fault is valid. Once a statically detectable fault is known to be valid, the second parameter affects how probable that fault is to manifest itself as a failure. This will vary based upon the exact statically detectable fault, the data range necessary to exploit the given fault, and other factors. There are other more advanced mechanisms for dealing with false alarms in static analysis outputs, including Bayesian Statistical Post Analysis advocated by Jung et al. [30] [31], historical profiling advocated by Williams and Hollingsworth [32] [33], and Z-Ranking advocated by Kremenek and Engler [34]. For this experiment, when possible, p_fp will ideally be selected based upon historical information and historical statistics, such as those published by Sullivan and Chillarege [35], Hovemeyer and Pugh [36], Artho and Havelund [37], and Wagner et al. [38]. If a statement block includes a call to an external method or function, then the reliability of the block will be multiplied by the reliability of the method which is being called. If this number is not available, a default value can be used. If multiple functions are called within a block, then the reliability will be the product of their discrete reliabilities. Once the reliability for each block has been established, the reliability for each decision which leads into a block needs to be established. This is accomplished in the same manner, using statically detectable faults and the same mechanism that is used for source code blocks. The overall reliability for the system will be obtained by using a Bayesian Belief Network to compare the reliability of the given paths with the execution frequency for those paths.
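The following sketch illustrates, under simplifying assumptions, how Equations 1 through 3 and the call-reliability product described above might be evaluated for a single statement block. The data structures, field names, and probability values are hypothetical placeholders for illustration; they are not part of the SOSART tool.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedFault:
    """One statically detectable fault reported for a statement block."""
    p_fp: float  # false-positive-related probability used in Equation 2
    p_if: float  # probability of an immediate failure for the fault (Equation 2)

@dataclass
class StatementBlock:
    """A contiguous run of statements uninterrupted by a conditional."""
    faults: list = field(default_factory=list)                # DetectedFault records
    called_reliabilities: list = field(default_factory=list)  # reliabilities of called methods

def block_reliability(block):
    """Combine per-fault probabilities (Equations 2 and 3) and multiply in the
    reliabilities of any methods the block calls, as described in the text."""
    p_fail = sum(f.p_fp * f.p_if for f in block.faults)   # Equation 2 per fault, summed per Equation 3
    r = 1.0 - p_fail                                       # Equations 1 and 3
    for r_call in block.called_reliabilities:              # calls multiply in their own reliability
        r *= r_call
    return max(r, 0.0)

# Example: a block with two warnings and one call to a method of reliability 0.999.
blk = StatementBlock(
    faults=[DetectedFault(p_fp=0.6, p_if=0.01), DetectedFault(p_fp=0.9, p_if=0.05)],
    called_reliabilities=[0.999],
)
print(f"estimated block reliability: {block_reliability(blk):.4f}")
```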

Research Plan In order to validate this model, an experiment will be conducted in which a set of software components will be analyzed for statically detectable defects. Limited operational testing in a simulated environment will be performed. Once this step is completed, components will be deployed into an experimental operational environment and monitored for failure. The results of field failures will then be compared with the estimated reliability from the model, allowing validation of the model. Implementation of this reliability model requires the development of an analysis tool, referred to as SOSART (SOftware Static Analysis Reliability Tool). The SOSART tool serves as a bug finding meta tool, automatically combining and correlating statically detectable faults, as well as a reliability assessment tool, calculating software reliability based upon a structural analysis of the given source code,


the program paths executed during limited testing, and a Bayesian Belief Network. The meta tool aids both in ranking source code based on the number of statically detectable faults within the source code as well as assessing the operation of different static analysis tools for consistency in detecting faults. The reliability tool portion of SOSART aids an engineer in assessing whether or not the existing software module will meet the requisite reliability for the intended application. To enable cross platform operation, the SOSART tool will be developed in Java. Both GUI and command-line driven variants of the tool will be available to allow for both casual usage and automated batch file and script file operation of the tool. Validation of the software development model requires the usage of a mature software component for which the source code is readily available. To demonstrate this technique, the Tempest web server, developed by members of the NASA Glenn Research Center, Flight Software Engineering Branch will be analyzed. Tempest is an embedded real-time HTTP web server, accepting requests from standard browsers running on remote clients and returning HTML files. It is capable of serving Java applets, CORBA, and virtual reality (VRML), audio, video files, etc. NASA uses Tempest for the remote control and monitoring of real, physical systems via inter/intra-nets. Tempest is of commercial quality, fully documented, simple to install and supports simple graphical user interfaces. By its nature, Tempest is an enabling, or platform, technology which will be used to support multiple applications, including space communications, biotechnology, and education. The Java source code for the Tempest web server will first be statically analyzed for statically detectable faults using currently available analysis tools for Java and the SOSART tool. Once the static analysis has been completed, a test website will be developed and served by the Tempest software. For reliability purposes, the Tempest web server will be running on top of a Linux operating system. The Tempest web software will execute in an operating environment with source code coverage being measured. During this time, test clients will periodically connect with the web server in order to verify its correct operation as well as exercise the website. Clients will be custom developed in the Java Language. Multiple clients will be run from multiple domains and locations in order to measure failure rates attributes to network failures versus actual server software failures. Web accesses will also be logged by the Tempest software so that the reliability of the developed verification clients can be measured.

Summary The problem of software reliability is vast and ever growing. As more and more complex electronic devices rely further upon software for fundamental functionality, the impact of software failure becomes greater. Market forces, however, have made it more difficult to measure software reliability through traditional means. The reuse of previously developed components, the emergence of open source software, and the purchase of developed software has made delivering a reliable final product more difficult. One of the recent software engineering techniques that has emerged with great promise is the static analysis tool. While static analysis does not represent fundamentally new technology, it is only recently that the computing power has increased significantly enough to allow advanced software analysis. While by no means a “Silver Bullet” for perfect software, static analysis has been shown to reduce software defects by a factor of six, as well as detect 60% of post-release failures. Typically applied during software development, especially as a tool for software code review, recent research has indicated a statistically significant relationship between the faults detected with static analysis and the actual field failures detected in a product. This article describes a reliability model which includes limited testing as well as static analysis of the raw source code to estimate the reliability of an existing software module. The reliability is calculated through a Bayesian Belief Network incorporating the path coverage obtained during limited testing, the structure of the source code, and results from multiple static analysis tools combined using a meta tool.


References [1] G. Tassey, “The economic impacts of inadequate infrastructure for software testing - final report,”

National Institute of Standards and Technology, Tech. Rep. RTI 7007.011, May 2002. [2] M. Lalo and S. Barriault, “Maximizing software reliability and developer’s productivity in

automotive: Run-time errors, misra, and semantic analysis,” Polyspace Technologies, Tech. Rep., 2005.

[3] H. Stewart, "Meeting FDA requirements for validation of medical device software," September 2002, briefing advertisement. [Online]. Available: http://www/henrystewart.com/conferences/september2002/N02613/

[4] W. Everett, S. Keene, and A. Nikora, “Applying software reliability engineering in the 1990s,” IEEE Transactions on Reliability, vol. 47, no. 3, pp. SP372–SP378, September 1998.

[5] C. Hote, “Run-time error detection through semantic analysis: A breakthrough solution to todays software testing inadequacies in automotive,” Polyspace Technologies, Tech. Rep., September 2001.

[6] M. Sipser, Introduction to the Theory of Computation. 20 Park Plaza, Boston, MA 02116-4324: PWS Publishing Company, 1997.

[7] S. Xiao and C. H. Pham, “Performing high efficiency source code static analysis with intelligent extensions.” in APSEC, 2004, pp. 346–355.

[8] Q. Systems, “Overview large java project code quality analysis,” QA Systems, Tech. Rep., 2002. [9] J. Viega, J. T. Bloch, Y. Kohno, and G. McGraw, “Its4: A static vulnerability scanner for c and c++

code,” in ACSAC ’00: Proceedings of the 16th Annual Computer Security Applications Conference. Washington, DC, USA: IEEE Computer Society, 2000, p. 257.

[10] V. B. Livshits and M. S. Lam, “Finding security vulnerabilities in java applications with static analysis,” in 14th USENIX Security Symposium, 2005.

[11] A. Rai, “On the role of static analysis in operating system checking and runtime verification,” Stony Brook University, Tech. Rep., May 2005, technical Report FSL-05-01.

[12] C. Artho, “Finding faults in multi-threaded programs,” Master’s thesis, Federal Institute of Technology, 2001. [Online]. Available: citeseer.ist.psu.edu/artho01finding.html

[13] K. J. Harrison, “Static code analysis on the c-130j hercules safety-critical software,” Aerosystems International, UK, Tech. Rep., 1999. [Online]. Available: www.damek.kth.se/RTC/SC3S/papers/Harrison.doc.

[14] “Polyspace for c++,” Product Brochure. [Online]. Available: http://www.polyspace-customer-center.com/pdf/cpp.pdf

[15] R. L. Glass, “The realities of software technology payoffs,” Commun. ACM, vol. 42, no. 2, pp. 74–79, 1999. [Online]. Available: http://portal.acm.org/citation.cfm?id=293411.293481#

[16] ——, “Inspections - some surprise findings,” Commun. ACM, vol. 42, no. 4, pp. 17–19, 1999. [Online]. Available: http://portal.acm.org/citation.cfm?id=293411.293481#

[17] D. J. Richardson, “Static analysis,” ICS 224: Software Testing and Analysis Class Notes, Spring 2000. [Online]. Available: http://www.ics.uci.edu/_djr/classes/ics224/lectures/08-StaticAnalysis.pdf

[18] D. Giesen, “Philosophy and practical implementation of static analyzer tools,” QA Systems Technologies, Tech. Rep., 1998.

[19] N. Nagappan, L. Williams, M. Vouk, J. Hudepohl, and W. Snipes, “A preliminary investigation of automated software inspection,” in IEEE International Symposium on Software Reliability Engineering, 2004, pp. 429–439.

[20] S. Krishnamurthy and A. P. Mathur, “On predicting reliability of modules using code coverage,” in CASCON ’96: Proceedings of the 1996 conference of the Centre for Advanced Studies on Collaborative research. IBM Press, 1996, p. 22.

[21] N. Rutar, C. B. Almazan, and J. S. Foster, “A comparison of bug finding tools for java,” in Proceedings of the 15th IEEE Symposium on Software Reliability Engineering. Saint-Malo, France: IEEE Computer Society, November 2004.

[22] S. Lu, Z. Li, F. Qin, L. Tan, P. Zhou, and Y. Zhou, “Bugbench: Benchmarks for evaluating bug detection tools,” in Proceedings of the Workshop on the Evaluation of Software Defect Detection Tools, June 2005.

[23] E. N. Adams, "Optimizing preventive service of software products," IBM J. Research and Development, vol. 28, no. 1, pp. 2–14, January 1984.


[24] C. Kaner, “Software negligence and testing coverage,” Florida Tech, Tech. Rep., 1995. [25] P. Garg, “Investigating coverage-reliability relationship and sensitivity of reliability to errors in the

operational profile,” in CASCON ’94: Proceedings of the 1994 conference of the Centre for Advanced Studies on Collaborative research. IBM Press, 1994, p. 19.

[26] F. D. Frate, P. Garg, A. P. Mathur, and A. Pasquini, “On the correlation between code coverage and software reliability,” in Proceedings of the Sixth International Symposium on Software Reliability Engineering, 1995, pp. 124–132.

[27] B. Marick, “How to misuse code coverage,” Reliable Software Technologies, Tech. Rep., 1999. [28] J. M. Agustin, “Jblanket: Support for extreme coverage in java unit testing,” University of Hawaii at

Manoa, Tech. Rep. 02-08, 2002. [Online]. Available: citeseer.ifi.unizh.ch/605556.html [29] P. Piwowarski, M. Ohba, and J. Caruso, “Coverage measurement experience during function test,” in

ICSE ’93: Proceedings of the 15th international conference on Software Engineering. Los Alamitos, CA, USA: IEEE Computer Society Press, 1993, pp. 287–301.

[30] Y. Jung, J. Kim, J. Shin, and K. Yi, “Taming false alarms from a domain-unaware c analyzer by a bayesian statistical post analysis.” in SAS, 2005, pp. 203–217.

[31] Y. Jung, J. Kim, J. Sin, and K. Yi, “Soundness by static analysis and false-alarm removal by statistical analysis: Our airac experience,” in Workshop on the Evaluation of Software Defect Detection Tools, June 12 2005.

[32] C. C. Williams and J. K. Hollingsworth, “Using historical information to improve bug finding techniques,” in Workshop on the Evaluation of Software Defect Detection Tools, June 12th 2005.

[33] ——, “Bug driven bug finders,” in International Workshop on Mining Software Repositories (MSR), May 2004.

[34] T. Kremenek and D. Engler, “Z-Ranking: Using statistical analysis to counter the impact of static analysis approximations,” in SAS 2003, 2003. [Online]. Available: citeseer.ist.psu.edu/659316.html

[35] M. Sullivan and R. Chillarege, “A comparison of software defects in database management systems and operating systems,” in in 22nd Int. Symp. on Fault-Tolerant Computing (FTCS-22). IEEE Computer Society Press, 1992, pp. 475–484.

[36] D. Hovemeyer and W. Pugh, “Finding bugs is easy,” in OOPSLA ’04: Companion to the 19th annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. New York, NY, USA: ACM Press, 2004, pp. 132–136.

[37] C. Artho and K. Havelund, “Applying jlint to space exploration software.” in VMCAI, 2004, pp. 297–308.

[38] S. Wagner, J. Jrjens, C. Koller, and P. Trischberger, “Comparing bug finding tools with reviews and tests,” in Proceedings of Testing of Communicating Systems: 17th IFIP TC6/WG 6.1 International Conference, TestCom 2005. Montreal, Canada: Springer-Verlag GmbH, May - June 2005.


The Determination of Dust Opacities Using Color Asymmetries in Inclined Galaxies

Student Researcher: Paul H. Sell

Advisor: Dr. Adolf Witt

The University of Toledo Department of Physics and Astronomy

Abstract The presence of interstellar dust obscures physical processes occurring within other galaxies. Astronomers need to know to what degree dust opacity contributes in partially concealing these physical processes. The determination of the dust opacity of galaxies is, therefore, of major importance, if the light from galaxies is to be interpreted correctly in terms of the amount and nature of the processes occurring within them. Studying the reddening of galaxies could lead to a solution to this problem. When a galaxy is viewed in the sky at an inclination between zero and ninety degrees, a color asymmetry occurs between the front half and the back half of the galaxy. We measure the color asymmetries caused by dust attenuation in B-K along the minor axes of the disks of real galaxies and compare their color asymmetries to our model calculations to estimate their optical depths. Project Objectives We want to determine the dust opacity in a sample of inclined galaxies. We will assess whether the existing data in the currently available online databases are adequate. Methodology Used We are utilizing galaxy models consisting of realistic three-dimensional distributions of stars and dust. The transfer of stellar radiation through the dusty medium has been simulated by the numerical Monte Carlo technique in the DIRTY (DustI Radiative Transfer Yeah!) radiative transfer code, which allows for a full accounting of absorption and scattering to take place (Gordon et al. 2001). We used an IDL-based program created by a former University of Toledo student and current collaborator, Gregory Madsen, titled Dusty View to display images derived from calculations made with DIRTY. The Dusty View user-friendly console allows us to choose the inclination angle, dust opacity, the distribution of dust, and the wavelengths that we would like to output from the calculations made with DIRTY. Thus, we are able to simulate the complete range of disk galaxies viewed in the sky. We have extracted a total of 36 sets of surface brightness measurements from our models assuming a clumpy distribution of dust with inclinations of 50, 65, 70, 75, 80, and 85 degrees (the color asymmetry is not measureable for galaxies inclined less than 50 degrees) and dust opacity values of 0.25, 0.50, 1, 2, 4, and 8. Because pixel-to-pixel variations can cause an undesirable scatter in the surface brightness measurements along the minor axis of a modeled galaxy where we expect to measure color asymmetries, we have chosen to average the pixels in a line of points that stretches the length of the minor axis. We extracted our surface brightness measurements from the galaxy models by utilizing a slit 25 pixels wide and a length that depended on the inclination of the galaxy, where more highly inclined galaxies resulted in fewer surface brightness measurements. Dusty View facilitated the calculations that needed to be made by directly outputting the color in B-K for us and the corresponding standard deviations of each averaged row of pixels in absolute magnitudes. Even though each measurement consisted of a row of 25 averaged pixels, there was still too much scatter in the points. Therefore, we proceeded to average the pixel values further by making a box 5 rows high. We then moved the box of a total area of 125 pixels down the minor axis row by row. This averaging worked to our advantage as it not only decreased the noise immensely, but it also did not decrease the amount of measurements along the minor axis significantly. 
Then, using the DSS2 (Digital Sky Survey 2) and the 2MASS (Two-Micron All-Sky Survey) survey data, we measured the surface brightnesses along the minor axes of real spiral galaxies using boxes of varying sizes. The size of the box depended on the size of the galaxy in the sky and the number of measurements


depended on the inclination or length of the minor axis. We measured the full length of the minor axis to a radius that corresponds to approximately 10% above the level of the sky in B. We have plotted those measurements for comparison to the model measurements in Figures 1-4.

Results
Our results are evident in the surface brightness measurements that were taken from the all-sky surveys and the model. The inconsistencies between the model measurements and the real galaxy measurements of NGC253 are a result of the low depth of the exposures in the 2MASS all-sky survey and a number of complex factors inherent in real galaxies. First, because 2MASS has to image the entire sky, there cannot be more than one exposure for each of our galaxies. This especially results in a low signal-to-noise ratio in K, as there is a significant amount of background sky emission in K, implying large error bars, as is evident in Figures 2 and 4. In addition, the models did not account for the wavelength dependence of the scale length. We see this in the much smaller reddening effect in the B-K color in the models as compared to the real galaxies. Lastly, as a result of the large error bars in the plots as commented on above, it is clear that imaging these galaxies one by one with a higher signal-to-noise ratio in K would be necessary for a more in-depth study of the effect of dust attenuation in inclined disk galaxies. The existing sky survey data are inadequate to estimate the dust opacities of our set of galaxies with a high confidence level.

Figures and Images

Figures 1 and 2 (top). These two plots show the color in B-K plotted as a function of the position along the minor axis of the galaxy. Figures 3 and 4 (bottom). These two plots show the difference in the color of the front half of the galaxy minus the back half of the galaxy plotted as a function of radius, beginning in the center of the galaxy and moving outward.

References
Gordon, K. D., Misselt, K. A., Witt, A. N., & Clayton, G. C., 2001, ApJ, 551, 269.
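For readers who want a concrete picture of the box-averaging step described in the Methodology, the sketch below is a rough, simplified illustration of averaging a 25-pixel-wide, 5-row-high box as it moves row by row along the minor axis of an image array. The synthetic images, the assumed orientation of the minor axis, and the function name are placeholders for illustration only; the actual analysis was performed with the IDL-based Dusty View console and the survey images.

```python
import numpy as np

def minor_axis_profile(image, box_width=25, box_height=5):
    """Average a box_width x box_height box moved row by row along the minor
    axis (assumed here to run vertically through the image center).  Each
    returned value is the mean of 125 pixels, mimicking the averaging in the
    text."""
    n_rows, n_cols = image.shape
    left = n_cols // 2 - box_width // 2            # left edge of the 25-pixel-wide slit
    strip = image[:, left:left + box_width]        # slit along the minor axis
    profile = [strip[top:top + box_height, :].mean()
               for top in range(n_rows - box_height + 1)]
    return np.array(profile)

# Example with synthetic B and K surface-brightness images (in magnitudes);
# the B-K color profile is then the difference of the two averaged profiles.
b_img = np.random.normal(22.0, 0.1, size=(200, 200))
k_img = np.random.normal(18.5, 0.1, size=(200, 200))
b_minus_k = minor_axis_profile(b_img) - minor_axis_profile(k_img)
print(b_minus_k[:5])
```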


Engine and Generator Efficiency Analysis

Student Researcher: Bradley J. Sheldon

Advisor: Dr. Richard Gross

The University of Akron Mechanical Engineering Department

Abstract
This research analyzes the engine and generator efficiency for the Challenge X Team at The University of Akron. The Challenge X Competition is a three-year competition to re-engineer a GM Equinox to minimize energy consumption, emissions, and greenhouse gases while maintaining or exceeding the vehicle's utility and performance. The engine is a 1.9 TDI diesel engine from a 2005 Volkswagen Beetle, paired with a 2005 Volkswagen DSG 6-speed automatic transmission. The generator is a Siemens ACW-80-4 PMSM generator. These components will be combined with a Ballard drive motor for the rear wheels. Using dynamometer test data, engine operating points will be established. The appropriate equations and settings will be identified and converted into MATLAB for use in the control strategy. This control strategy will control when the engine, generator, and motor will be running. These points and settings will be dictated by the speed of the vehicle, the power requested by the driver, and the limitations of the components. Research is still being done to refine these points. Basic points were found for this year's competition at the end of May 2006.

Project Objectives
There has been a greater demand to reduce fuel consumption and vehicle emissions while maintaining the safety, performance, utility, and consumer appeal of a vehicle. This project develops and explores advanced vehicle technologies that address important energy and environmental issues. The more efficient the engine and generator are together, the better the fuel economy and the lower the emissions will be for the GM Equinox. This research focuses on the engine's most efficient operating range when run alone and also when coupled with a generator. It also concentrates on when to use which component in the series-parallel 2x2 architecture. The most suitable usage of the power provided by the diesel engine, generator, and electric drive motor, and of the state of charge of the battery pack and the ultracapacitor bank, will be determined over the estimated drive cycle seen in Chart 1. The appropriate operating points will be determined for the speed and needed power of the vehicle (with special emphasis on competition tests and requirements). These results will be integrated into a control strategy to be used in the second and third years of competition.

Chart 1. Anticipated velocity (mph) versus time (sec) for the competition.


Methodology Used To begin looking at the method used, the basic concept behind The University of Akron’s Equinox has to be understood. The team has chosen a series-parallel 2x2 architecture which can be seen in Figure 1.

This powertrain combines a 75 kW compression-ignition engine using B20 biodiesel fuel to run the front wheels with a Ballard electric motor capable of 65 kW peak mechanical power output to run the rear tires. The electrical system consists of a DC bus supplied by a 20 kW generator with a nickel-metal-hydride battery pack and an ultracapacitor bank for energy storage. This specified hybrid powertrain should give the Equinox the ability to meet or exceed the competition goals.

This powertrain architecture allows several different modes of hybrid electric vehicle (HEV) operation to choose from. 1) With the mechanical transmission in neutral and the diesel engine turned off, the HEV can operate as a purely electric vehicle. The rear motor provides traction, powered by the ultracapacitor and the battery bank. 2) With the mechanical transmission in neutral and the diesel engine on, the HEV can operate in a series mode. The rear motor moves the vehicle while being powered by the batteries and ultracapacitors. The generator is also producing power to be stored and used. 3) With the mechanical transmission engaged, the diesel engine can deliver power both to the electrical path and to the mechanical path to be used by the front wheels. This power split will be controlled by the generator loading levels (0-20 kW) and be determined to optimize the efficiency of the system (see "Results Obtained"). 4) The HEV can also have the diesel engine providing maximum power to the front wheels (i.e., the generator is not engaged) and the electric motor drawing current from the batteries and the ultracapacitors to provide maximum power at the rear wheels. This should produce peak performance. This Equinox can be run using these four different operating modes. Using a dynamometer test, many data points were taken. The dyno was run with the engine by itself, coupled with the generator with no load, coupled with the generator at different loading levels, and coupled with the generator when fully loaded. Target engine speeds and powers were set, and the actual speed, torque, and power were recorded. The fuel flow rate was also documented. The efficiency for all the data points was calculated, and all data with efficiency lower than 35% were set aside. Data at 35% or higher were kept for further study. Some data examples can be seen below in Table 1. The operation ranges were examined with respect to the selected gear ratios of the transmission.

Table 1. Example of some of the data taken when the diesel engine had no generator load.

Target Speed (RPM) | Target Power (HP) | Average Actual Speed (RPM) | Average Actual Torque (ft-lbs) | Average Actual Power (HP) | Fuel Flow Rate (l/hr) | Ideal Power (HP) | Efficiency (%)
1500 | 10 | 1538.1 | 35.1 | 10.3 | 2 | 26.63 | 38.6
1500 | 30 | 1492.5 | 100 | 28.4 | 6 | 79.88 | 35.6
1750 | 20 | 1741.7 | 61.4 | 20.4 | 4 | 53.25 | 38.2
1750 | 30 | 1740.5 | 88.3 | 29.3 | 5.8 | 77.21 | 37.9
1750 | 40 | 1751 | 117 | 39.0 | 7.4 | 98.52 | 39.6
2000 | 30 | 1986.0 | 77.2 | 29.2 | 5.9 | 78.55 | 37.2
2000 | 40 | 2024.7 | 100.5 | 38.7 | 8.0 | 106.50 | 36.4
2250 | 30 | 2274.2 | 72.3 | 31.3 | 6.2 | 82.54 | 37.9
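As a cross-check on how the Ideal Power and Efficiency columns in Table 1 relate to the measured quantities, the sketch below recomputes them from the actual power and the fuel flow rate. The fuel energy density used here (about 35.7 MJ per liter, which appears to reproduce the tabulated Ideal Power values) is my assumption rather than a figure taken from the team's test documentation.

```python
HP_PER_KW = 1.0 / 0.7457      # horsepower per kilowatt
FUEL_ENERGY_MJ_PER_L = 35.7   # assumed diesel energy content (MJ per liter)

def ideal_power_hp(fuel_flow_l_per_hr):
    """Power that would be produced if all of the fuel's energy became shaft work.
    l/hr * MJ/l gives MJ/hr; dividing by 3.6 converts MJ/hr to kW."""
    kw = fuel_flow_l_per_hr * FUEL_ENERGY_MJ_PER_L / 3.6
    return kw * HP_PER_KW

def efficiency(actual_power_hp, fuel_flow_l_per_hr):
    """Efficiency as reported in Table 1: actual power over ideal power."""
    return actual_power_hp / ideal_power_hp(fuel_flow_l_per_hr)

# First row of Table 1: 10.3 HP actual output at a fuel flow of 2 l/hr.
print(f"ideal power ~ {ideal_power_hp(2.0):.2f} HP")   # roughly 26.6 HP
print(f"efficiency  ~ {efficiency(10.3, 2.0):.1%}")    # roughly 38.7%
```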


Results Obtained

[Chart 2: Actual Power (HP) vs. Front Wheel Linear Vehicle Speed (mi/hr), with power trendlines of the form y = c·x^1.1836 (c = 0.2467, 0.3089, 0.4196, 0.6468, 1.1089, 1.7671, and 2.0618) for the series first gear through sixth gear and reverse.]

Chart 2. The actual power delivered by the engine vs. the vehicle speed of the Equinox.

[Chart 3: Actual Power (HP) vs. Front Wheel Linear Vehicle Speed (mi/hr) for operating points with 35% or higher efficiency, showing envelopes for first through sixth gear and reverse.]

Chart 3. The envelopes that are used to determine when to switch gears.

Using Charts 2 and 3 above, the operating regions in which the engine has the best efficiency can be identified by knowing when to change gears. These are the engine operating maps with respect to its selected gear ratios. By determining the equations that make up the sides of the gear envelopes, the fuel flow rate can be kept to a minimum and the diesel engine will be most effective. Chart 4 below shows an example of the equations that best fit sixth gear. Each gear has equations just like this example to be used in the control strategy. Chart 5 shows an example of the equations to be used between selected gears. They will help make the car more efficient by giving the controller an operating range to keep the fuel flow rate at a minimum yet still get the most power available.


[Chart 4: Equations for the sixth gear envelope, actual power (HP) vs. front wheel linear vehicle speed (mi/hr): bottom boundary y = 0.0242x^1.6498, top boundary y = -0.0017x^2 + 0.9463x - 8.2451.]

[Chart 5: Equations for the envelope between sixth and fifth gear, actual power (HP) vs. front wheel linear vehicle speed (mi/hr): y = 0.0072x^2 - 0.2298x + 9.278 and y = -0.0088x^2 + 2.293x - 58.893.]

Chart 4. An example of equations to be used in the control strategy.

Chart 5. An example of the equations to be used in-between the gears.
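To indicate how boundary equations such as those in Charts 4 and 5 could be consulted by the controller, the sketch below checks whether a requested operating point lies inside the efficient sixth-gear envelope. The polynomial coefficients are the ones shown in Chart 4; the envelope-check logic itself is an illustrative assumption on my part, not the team's actual MATLAB control strategy.

```python
def sixth_gear_bottom(speed_mph):
    """Lower boundary of the efficient sixth-gear envelope (Chart 4)."""
    return 0.0242 * speed_mph ** 1.6498

def sixth_gear_top(speed_mph):
    """Upper boundary of the efficient sixth-gear envelope (Chart 4)."""
    return -0.0017 * speed_mph ** 2 + 0.9463 * speed_mph - 8.2451

def in_sixth_gear_envelope(speed_mph, requested_power_hp):
    """True if the requested operating point lies inside the region where the
    measured engine efficiency in sixth gear was 35% or higher."""
    return sixth_gear_bottom(speed_mph) <= requested_power_hp <= sixth_gear_top(speed_mph)

# Example: cruising at 70 mph while the driver requests 30 HP.
print(in_sixth_gear_envelope(70.0, 30.0))   # True for these chart coefficients
```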

Significance and Interpretation of Results
Knowing when the engine is most efficient helps determine when to utilize the different modes of the powertrain architecture. In reference to the different modes respectively described in "Methodology Used," operating points can be recognized. 1) This mode is very quiet and is suitable for low speeds and short time intervals. All the stored energy can be depleted during this operation before having to switch to a different mode. 2) This mode is suitable for start-and-stop operation with moderate acceleration and deceleration. It also enables the engine to run at its most efficient torque/speed for the average power required while loading the generator at the desired load rate. 3) This mode allows for higher power operation. It can optimize the entire system by using the most efficient generator load. 4) This mode is for peak performance. It is most useful during short intervals of high acceleration or uphill towing.

Acknowledgments
The author would like to thank the Challenge X Team members for all the help and support in collecting data, along with all the sponsors of The University of Akron team and the Challenge X Competition in general. The author would also like to thank his advisors, Dr. Gross and Dr. Lam from The University of Akron, and everyone from the Ohio Space Grant Consortium.

References
1. <http://www.challengex.org/default.htm>
2. <http://www.challengex.uakron.edu/>


The Role of p38 in Bone Modeling

Student Researcher: Bethany G. Sibbitt

Advisor: Dr. Alicia Schaffner

Cedarville University Science and Mathematics Department

Abstract
Skeletal tissue originates as mesenchymal cells derived from mesoderm. Condensation of these cells occurs around the fourth week of fetal development to form rudimentary limb buds. The actual bone forms through two types of ossification – intramembranous and endochondral. The first mechanism, in which bones form directly in the mesenchyme, occurs mainly in flat bones. The latter mechanism is responsible for the majority of bone formation. Endochondral ossification involves the replacement of hyaline cartilage by bone tissue. To fully understand bone tissue homeostasis, it is crucial to discuss bone histology. There are four types of cells present within bone tissue – osteogenic cells, osteoblasts, osteoclasts, and osteocytes. Osteogenic cells are undifferentiated cells arising from mesenchyme. Mature osteogenic cells give rise to osteoblasts. These cells are responsible for secreting components necessary for the bone matrix and initiation of calcification. Osteoclasts, on the other hand, are responsible for breaking down the matrix. These cells arise from the fusion of multiple monocytes. Osteocytes are responsible for daily bone metabolism. Osteoblasts and osteoclasts work together in tightly regulated bone modeling. This involves bone deposition – the addition of mineral and collagen fibers to bone by osteoblasts – and bone resorption – their removal by osteoclasts. It is evident that tight regulation of both types of cells is necessary to guard against various types of bone disease. Overproduction of osteoclasts results in diseases like osteoporosis, while overproduction of osteoblasts can result in osteopetrosis. One method of regulation lies in the signaling pathways for osteoblasts and osteoclasts. Much has been studied about the factors affecting osteoclast formation and differentiation. These factors include members of the tumor necrosis factor (TNF) family. Signaling of these pathways results in gene transcription. The RANK ligand (RANKL) belongs to the TNF family and is manufactured by osteoblasts. Its receptor, RANK, resides on committed osteoclast cells. RANKL is responsible for differentiation of osteoclast precursors. The binding of RANKL to RANK signals a TRAF6 adapter protein. This induces the activation of NF-kB and JNK in the osteoclast precursors. Osteoprotegerin (OPG) acts as a competitor for RANKL. Therefore, the RANKL-RANK-OPG complex is important for osteoclastogenesis. OPG can directly phosphorylate p38 and induce a p38 MAPK signaling cascade; however, this phosphorylation occurs strictly in precursor cells. Once p38 is activated, it activates microphthalmia transcription factor (MITF), which is responsible for terminal osteoclast differentiation. Phosphorylation of MITF subsequently activates target genes in osteoclasts. The RANKL/RANK pathway can also act independently from OPG. In this case, RANKL induces a complex containing TRAF6, TAK1, and TAB2. TAB2 is responsible for activating p38. Another important modulator of osteoclast differentiation is TNF-α, another member of the TNF family. It has two receptors: TNFR1 positively regulates osteoclast formation and function while TNFR2 acts as an inhibitor. TNF-α can activate a variety of signaling pathways including the p38 MAPK. It is believed that TNFR1 is the predominant modulator for intracellular signaling.
TNFR1 acts as a primary promoter of osteoclastogenesis and activates several MAPK pathways. The TNFR1-mediated p38 activation possibly utilizes receptor-interacting protein (RIP) and MAPK kinase 3. New evidence suggests that TRAIL, another TNF member, also moderates osteoclast formation and function. TRAIL is known for inducing apoptosis upon interaction with its receptors. Studies show that


TRAIL inhibited osteoclastogenesis in vitro. TRAIL works by blocking the p38 pathway induced by RANKL.

Objective
The objective of this project is to isolate specific proteins that affect the p38 signaling pathway, thereby altering gene expression. It is important to determine which proteins may interfere with these signaling pathways. This could lead to possible treatment or prevention of many common bone diseases. In order to determine how the p38 kinase cascade plays a role in osteoclast differentiation, we have set out to determine what additional targets there are for p38. This will help to further our understanding of possible causes of some of the common bone growth diseases. I will be conducting a yeast two-hybrid assay with p38 in order to determine if specific proteins bind and what effect they have on p38 and subsequent gene expression.

Methodology
The yeast two-hybrid is designed to utilize reporter genes and two domains on the GAL4 protein – the activation domain and the binding domain. Usually the activation and binding domains are linked on GAL4. In the yeast two-hybrid, the binding domain was fused into the p38 plasmid, and the activation domain was fused to a cDNA library. This library contains thousands of clones of bone marrow proteins. Since the domains have been spatially separated, the only way that they can interact and bind is if the specific protein interacts with p38 itself. The two-hybrid consists of mating the yeast strain (Y87) transformed with the bait – the p38 binding domain – with a second yeast strain (AH109) transformed with the cDNA library. To begin my experiment, I transformed competent E. coli cells with the p38 plasmid. The plasmid was added to tubes containing the cells and then denatured through an ice incubation and subsequent heat shock. After the E. coli were successfully transformed, I cultured one of the colonies to provide enough DNA to harvest and purify. At this point, I would have initiated the yeast two-hybrid; however, there were various problems receiving the yeast colonies from the manufacturer. Initially I intended to transform the yeast myself. In light of the delay, Clonetech transformed the yeast themselves. Once I received the yeast strains, I made the media and plates necessary for growth. The media included the antibiotic kanamycin and a tryptophan amino acid dropout supplement. The agar plates were divided into four groups: -leucine, -tryptophan, -leucine/-tryptophan, and -leucine/-tryptophan/-adenine/-histidine. The yeast requires certain amino acids to remain viable. If the transfected plasmid provides the necessary amino acid, the yeast will be able to grow in an environment lacking that nutrient. The -leu plates are used to isolate the transformed bait. The -tryptophan plates are used to isolate the transformed library. The -leu/-trp plates merely show that the bait and library have interacted. The -leu/-trp/-his/-ade plates coincide with the reporter genes and will indicate that the library is bound to the bait. After receiving the yeast, I needed to grow the bait from a single colony. The yeast containing only the plasmid (pGBKT7) was also grown as a control. This process involved isolating one large colony and inoculating it in the specific media. After inoculation, the yeast was incubated overnight.

Results
To date, I have not been able to obtain any data due to several problems. The transformed colonies I received from Clonetech were not large enough to use directly in the growth step.
To combat this problem, I attempted to regrow some of the transformed colonies onto plates I had made. These colonies still did not grow to the correct size. After repeating that process twice, I went ahead with the procedure and used several small colonies instead of one large one. The yeast did not grow. I followed the same procedure with the untransformed yeast as a control to see if the entire strain was defective. The plain yeast did grow in the media. I compared the yeast containing the bait to the yeast with only the plasmid to see if the bait was toxic to the yeast. Neither of the transformed yeast grew, eliminating the possibility that the bait itself was toxic. My last attempt to combat the problem was to remake all of the media under extremely sterile conditions. All steps were taken to ensure the control environment was identical to the variable environment. This still has not yielded any results. I will not be able to mate the yeast until I can determine what is preventing their growth.


Nanotechnology: The Impact on Business and Society

Student Researcher: Japheth Thomas Siwo

Advisor: Mr. Herbert Stewart

Wilberforce University Engineering and Computer Science Department

Abstract
Nanotechnology is an emerging field that is becoming more prominent in many sectors of society. This science of building electronic circuits and devices from single atoms and molecules is drastically reducing the size of particles that were once much larger. Because smaller is increasingly perceived as better, nanotechnology is expected to make a large impact in many settings. The major fields include biotechnology, manufacturing, computer storage, and energy. This research project examines the impact that nanotechnology will have in the immediate future as well as over the long term. It begins with a brief introduction discussing the origin of the idea of this engineering-based technology. Next, it moves on to the current situation in today’s economy and the different industries and transitions being considered. Evaluations are made based upon the key advantages such as performance, reliability, safety, and cost. It also considers the disadvantages and some important issues to keep in mind, such as the privacy and security of organizations and individual consumers. Projections of corporate growth due to this technological innovation are presented. From these findings, recommendations are made pertaining to the direction of nanotechnology.

Project Objectives
The objective of this project is to look in depth at nanotechnology, the impact it has already made, and what we may look forward to in the future. It will describe the transition that has taken place to arrive at the conclusions currently reached. Nanotechnology was first introduced on a small scale in 1959 at an American Physical Society meeting by physicist Richard Feynman. The idea of nanotechnology became much more popular during the 1980s. Nanotechnology is basically the science of developing materials that are typically less than one hundred nanometers in size, which is comparable to 1/80,000 of the diameter of a human hair. Nanotechnology covers a broad range of industries, but this project focuses on automotive appearance products.

Methodology
Due to the limitations of testing nanotechnology directly, I was restricted in the approach I could take and the resources available to me. However, I was able to compare the latest car appearance products introduced by Eagle One, which is a major supplier of these products. Eagle One has gained an edge over its competitors by applying nanotechnology to its products. The Eagle One product that I tested was the new NanoWax, compared against the traditional TurtleWax product that I have used in the past. The major areas that I tested were resistivity, longevity, and appearance. The experiment began by using two vehicles that were the exact same color and applying NanoWax to one vehicle and traditional wax to the other. These vehicles were both exposed to several weather conditions that had an impact on them. The process revealed some differences between the nanotechnology-based wax and the traditional wax.

Results Obtained
Because the nanoparticles are much smaller, they provided better surface penetration. This allowed for better protection during different weather conditions such as heavy rain. It could be clearly seen that the NanoWax had a much brighter shine. Another advantage of using the NanoWax was that it was very easy to apply onto the vehicle. It did not leave any residue or swirl marks, which appeared with the traditional wax.
The longevity of the product could not be assessed because of the short time it had been applied. The results of this experiment are only the beginning of demonstrating the significance of nanotechnology and the impact it will have on different industries and businesses.


Significance and Interpretation of Results
Nanotechnology has attracted the attention of many researchers and investors due to its great potential. Once the remaining problems are solved, which are most evident in the health care industry, nanotechnology could be the wave of the future for growth in many areas. It will allow products to be built inexpensively, because smaller molecules require fewer materials. Based on the results of the NanoWax experiment, it is evident that nanotechnology will have a great impact on businesses and society as a whole. The major categories that will be affected in the immediate future include the following:

• Information Technology: smaller computer chips that store trillions of bits of information
• Materials: high strength, chemical sensing, protective suits
• Medical: drug and gene delivery improvements, detecting diseases with nanoscale sensors
• Energy: solar panel enhancements
• Environment: water purification systems and pollution control systems
• Aerospace: nanotube cables for space travel

Figures/Charts Figure 1 shown below is a comparison of the conventional size particles that have been used in the past by Eagle One for their car products. There is a tremendous reduction in the size of the particles that are now used, therefore creating better results.

Figure 1.

Figure 2 below shows the space that is missed using traditional wax due to the large particles in the wax. On the other hand NanoWax covers these areas since the particles are drastically smaller enabling the wax to reach these spots.

Figure 2.

References
1. Drexler, Eric and Peterson, Chris. “Unbounding the Future: The Nanotechnology Revolution.” William Morrow Co., 1991.
2. Markoff, John. “Computer Scientists are Poised for Revolution on a Tiny Scale.” New York Times, 1 November 1999.
3. “Nanotechnology.” 1 January 2006. <http://www.eagleone.com>
4. Sullivan, Bob. “Nanotechnology? Make it So!” NBC News, January 1999.
5. “The Science of Nanotechnology.” 26 March 2006. <http://www.firstscience.com>


Obstacle Detection and Avoidance Methods Implemented via LiDAR for Synthetic Vision Navigation Systems

Student Researcher: Mark A. Smearcheck

Advisor: Dr. Maarten Uijt de Haag

Ohio University

School of Electrical Engineering and Computer Engineering

Abstract
Terrain-based Synthetic Vision Systems (SVS) incorporate Airborne Laser Scanning (ALS), inertial navigation, and Digital Terrain Models (DTM) to display flight-critical navigation information to pilots on a head-up display. Greater situational awareness and flight path information, coupled with assistance in poor visibility conditions, are obtained by correlating existing DTM data with ALS data acquired in real time through a method of laser scanning known as Light Detection and Ranging (LiDAR). This information is then used to correct for the random walk drift errors associated with the acceleration, attitude, and magnetic field information output by inertial navigation systems (INS). These experimental systems are extremely useful in low altitude flight; however, when navigating near the ground, obstacles such as cell towers, power lines, radio towers, traffic on the runway, and even buildings will pose a threat to aircraft safety. To make pilots aware of stationary obstacles, pre-programmed obstacle databases can be used to extract location information that can be viewed on a Synthetic Vision Display. Automatic Dependent Surveillance – Broadcast (ADS-B) may even be used to track ground vehicles equipped with this technology. However, these methods do not always prove to be reliable solutions, since they depend on information that is obtained external to the aircraft. To provide maximum integrity for any SVS, an internal obstacle detection method such as forward ALS is required.

Objectives
The Ohio University Avionics Engineering Center has purchased a LiDAR known as the Riegl LMS-Q280i ALS that has been used in the design of this system. The primary goals of this experiment are to write software capable of managing the LMS-Q280i, calculate the optimal operating parameters, integrate the LiDAR into the current SVS developed by the Ohio University Avionics Engineering Center, collect and analyze data, and determine the feasibility and operational complexity of the system. After the software is complete and the system is fully functional, ground tests will be performed to gather data that will be used in analysis.

Methodology
The Riegl ALS used in this experiment consists of a 4-facet rotating polygon mirror scanning mechanism capable of linear scanning at a range of 1500 meters. The laser has a maximum scan angle range of 45° or 60° at 90% of the measurement range. Laser scan speed can be varied between 5 lines per second and 80 lines per second with a measurement accuracy of ±20 mm. A near-infrared wavelength is emitted, and the unit is rated as a Class 1 eye-safe laser. The ALS has the ability to return range, angle, and intensity measurements that can be transformed to determine relative position. The first step was to determine the operating parameters of the laser, including pulse repetition frequency (PRF), angle between scans (Delta), and starting position. After the mathematical analysis is complete, the hardware design will be addressed. The Riegl LMS-Q280i possesses serial, parallel, and Ethernet interface capabilities. An appropriate method of communication must be determined to allow for the programming of the ALS. In order to integrate the object detection LiDAR into the SVS platform, software must be written for the Real Time Operating System (RTOS) QNX 6.0, which requires system processes to be completed within a specific time frame. This RTOS method of hardware control via software is known as a resource manager.
Upon completion and thorough testing of the integrated software and SVS system, ground tests must be performed. These tests will consist of a correlation of truth data and LiDAR test data. A method must be devised to allow specific obstacles to be correctly identified. Data will then be analyzed to determine the


efficiency and accuracy of the LiDAR-based obstacle detection. Preliminary conclusions on the feasibility and reliability of such a system will then be formed based on these results.

Results
The first experimental step was to determine the operating parameters of the laser. These parameters include the pulse repetition frequency (PRF), angle between scans (Delta), and starting position. The equations used to derive these parameters are given as:

PRF = Frequency * ( Scan Width / Mirror Rate )

Delta = Mirror Rate / Frequency

Start Position = 90 – ( Scan Width / 2 )

The system communication was implemented via Ethernet on a single subnet. The LiDAR runs telnet and FTP services and also allows TCP/IP sockets to be created on specified ports. This gives the data collection computer running QNX 6.0 the ability to connect to the LiDAR both to send configuration commands and to receive data. The resource management and system integration were programmed in C. The resource manager makes it possible to simultaneously read data from and write to all devices in the SVS. It is also responsible for calculating and parsing the data. A model of the communication system and resource management is shown in Figure 1.
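As an illustration of how these operating parameters would be evaluated, a minimal Python sketch is given below; the function name and the example frequency, scan width, and mirror rate are illustrative assumptions rather than the settings actually used with the LMS-Q280i.

# Minimal sketch of the scan-parameter relations given above. The inputs are
# illustrative placeholders, not the settings used with the LMS-Q280i.

def scan_parameters(frequency_hz, scan_width_deg, mirror_rate):
    """Return (PRF, Delta, Start Position) from the relations in the text."""
    prf = frequency_hz * (scan_width_deg / mirror_rate)   # pulse repetition frequency
    delta = mirror_rate / frequency_hz                     # angle between scans
    start_position = 90.0 - (scan_width_deg / 2.0)         # starting position, degrees
    return prf, delta, start_position

# Example: assumed 45 deg scan width, 40 lines/s mirror rate, 10 kHz frequency.
prf, delta, start = scan_parameters(10000.0, 45.0, 40.0)
print("PRF =", prf, "Delta =", delta, "Start Position =", start)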


Figure 1. System infrastructure and communication.

In order to begin preliminary testing of the system, the ALS was placed on the roof of Stocker Center, located in Athens, Ohio. Data was gathered by scanning four 23.5° segments of the skyline 40 times each. The 40 scans per segment were first checked for consistency before they were used in data correlation. The 23.5° sky segments were then combined to form a 94° view of the skyline. The test setup is shown in Figure 2 and the data obtained in the experiment is displayed in Figure 3.

Figure 2. Test setup.


Figure 3. Skyline scan data, plotted as distance from origin (meters) on both axes. Labeled features include the Convocation Center, trees, towers, and gabled roofs A and B; the legend indicates scans at 20, 40, and 60 scans/sec.

When compared to a photograph of the sky, the true accuracy and functionality of the system are demonstrated. This data has also been used in a LiDAR simulation program developed by researchers at Ohio University. The simulator demonstrates the measurement locations and density during a landing approach. A LiDAR-based obstacle detection system integrated into an SVS has proven to be a feasible concept. The amount of data that must be processed in this situation is large and will require a significant amount of computing power. The cost of the system is also high; however, as technology advances and computing power increases, the cost will decrease. There is still much work to be done on obstacle detection via LiDAR; however, with enough research and ingenuity the system could be used to provide integrity to many flight systems.

Acknowledgments and References
I would like to thank Ananth Vadlamani, my partner in this research, along with our advisor, Dr. Maarten Uijt de Haag. I would also like to thank the Ohio University Avionics Engineering Center for providing the laboratory and office space, funding the purchase of equipment, and providing an environment in which to perform testing and analysis of the system. Finally, I would like to thank the Ohio Space Grant Consortium for making this project possible.

References
1. Ananth Vadlamani, Mark Smearcheck, Maarten Uijt de Haag. “Preliminary Design and Analysis of a LiDAR Based Obstacle Detection System.” Proceedings, DASC Conference 2005.
2. Myron Kayton, Walter Fried. Avionics Navigation Systems. John Wiley and Sons, 1997.
3. RIEGL Laser Measurement Systems, 2004, “LMS-Q280i Airborne Laser Scanner – Technical Documentation and Users Instructions”, Austria.


Mixing Control in High Speed Jets Using Plasma Actuators

Student Researcher: Robert M. Snyder

Advisor: Dr. Mohammad Samimy

The Ohio State University Mechanical Engineering

Abstract The flow through the exhaust nozzle of a jet engine has been of crucial importance in aerospace applications over the past several decades. Modifications can be made to the nozzles of high-speed jet engines to increase or decrease mixing between the exiting flow and the ambient air. Two main methods exist to produce the desired results; streamwise vorticity generation and the manipulation of jet instabilities. Related work in this field has focused primarily on passive control by geometrical modifications to the nozzle exit including tabs, chevrons, and other trailing edge modifications. All are able to produce streamwise vortices, but lack frequency control, thereby making passive control the only viable option. Localized arc filament plasma actuators (LAFPA) developed in our lab at The Ohio State University have both high amplitude and bandwidth and are suitable for active control of high-speed, high Reynolds number flows (Samimy et. al. 2004 & 2006). These actuators will be implemented on a rectangular nozzle in order to efficiently enhance the mixing between the jet and the ambient air. By increasing the mixing, larger amounts of cool ambient air can be entrained into the hot jet thus reducing its heat signature. As a result, the ability to thermally track such jets will be decreased, thus giving this research significant application within developing stealth technologies. Project Objectives The focus of this research is on the development of a method of active control for the mixing layer following a rectangular nozzle. Such nozzles can primarily be found on stealth jets whose wide and thin geometry requires this type of nozzle. The developing technology of plasma actuators will be used in order to achieve active control of the jet. These actuators allow both frequency and phase control, thus providing several design variables, in addition to low power consumption. For simplicity and to operate under laboratory power constraints, a nozzle extension was developed with four actuators equally spaced along the top and bottom at the trailing edge of a rectangular nozzle. The primary factors that affect the development of coherent structures within the mixing layer and the manipulation of jet instabilities in the flow are the frequency, duty cycle, and phase of the actuators (Samimy et. al. 2004). These factors will be adjusted throughout future testing in order to determine the ideal operating conditions for maximum mixing enhancement. Methodology Used Throughout testing and experimentation of the rectangular nozzle, both the behavior of the plasma actuators and the flow will be observed. Actuators are formed by placing two electrodes in close proximity and sending a high voltage across them, which then ionizes the air between them producing an arc filament discharge. The current and voltage through the actuators will be monitored and recorded using a Tektronix P6015A high voltage probe and a Tektronix AM503S wide frequency range current probe. These devices provide a time history of both variables. As previously mentioned, the frequency, duty cycle, and phase of the actuators will be varied, and the resultant flow properties will be observed. Frequency is related to Strouhal number through the following equation:

St = f · h / Uj


where f is the frequency of actuator firing in Hertz, h is nozzle height in meters, and Uj is jet velocity in meters per second. By using Strouhal number as opposed to frequency, the results obtained can be scaled to those derived in other labs. Duty cycle refers to the ratio of time the actuator is “on” to the period of actuation. A lower duty cycle results in less power consumption but can lead to misfiring if not properly monitored. Lastly, phase refers to the sequence of actuator firing. Two main modes can be excited in a rectangular jet; a symmetric mode, where all actuators fire at once, and a flapping mode where a 180° phase shift is present between the upper and lower actuators. The primary method used for comparing forced cases will be planar flow visualization of laser light reflected by condensed particles within the jet’s mixing layer. The presence of condensation is a direct result of the degree of mixing between the jet and the ambient air. A laser sheet will be formed using a commercial 10 Hz pulsed Nd:YAG laser operating at a wavelength of 532 nm (green laser light). This sheet will then be oriented through sheet forming optics and directed to illuminate streamwise and crosstream sections of the jet’s mixing layer. The images formed by the laser sheet will then be captured using a Princeton Instruments ICCD camera. Flow visualization allows quick qualitative analysis of the amount of mixing and control authority by visual comparison between forcing cases. A variety of Mach numbers and actuator excitations can be produced through the provided equipment to find the optimum levels of operation. In addition to flow visualizations, there will be several other techniques utilized to record flow information. Most importantly, particle image velocimetry (PIV) will be used to provide the mean and fluctuating velocity of the flow field. With these analytical tools, an accurate assessment of the relationship between forcing and mixing will be produced, revealing the optimum operating conditions for maximum mixing enhancement. Results At this stage in the research, much of the preliminary design work has been successfully completed. A rectangular nozzle extension has been designed and machined for the bulk of future experimentation. The extension is attached to a converging-diverging nozzle with a design Mach number of 1.3. This extension was developed to support eight actuators and has been tested to ensure all actuators are functioning over the entire range of required frequencies. In addition to this initial testing, one of the actuators was replaced with a static pressure tap to determine the ideal stagnation pressure for the nozzle to operate at its design Mach number, thus reducing shock waves that could skew experimental results. This pressure was found to be 24.4 psig. Once this stagnation pressure was experimentally determined, a theoretical preferred frequency of roughly 8830 Hz was determined using an assumed Strouhal number of 0.25 based on previous experimentation (Olsen et. al. 2003). This theoretical frequency serves as a starting point, around which frequency values can be tested, rather than running unnecessary tests over a wide range of frequencies. Prior experiments have also indicated that having each set of actuators (top and bottom) 180° out of phase will provide for maximum mixing with all other factors held constant. Further testing will be conducted to verify the required forcing frequency for maximum mixing as well as optimum duty cycle and phase. 
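As an illustration of the Strouhal relation St = f·h/Uj used above, the short Python sketch below evaluates the forcing frequency for a target Strouhal number; the jet velocity and nozzle height are assumed, order-of-magnitude values, not the experimental conditions (the text reports only St ≈ 0.25 and the resulting preferred frequency of roughly 8830 Hz).

# Sketch of the Strouhal relation St = f * h / U_j. U_j and h below are
# assumed, order-of-magnitude values, not the measured experimental conditions.

def forcing_frequency(strouhal, jet_velocity_mps, nozzle_height_m):
    """Actuator forcing frequency f (Hz) for a target Strouhal number."""
    return strouhal * jet_velocity_mps / nozzle_height_m

St_target = 0.25      # from prior experimentation (Olsen et al., 2003)
U_j = 375.0           # assumed jet exit velocity, m/s
h = 0.0106            # assumed nozzle height, m
print("Preferred forcing frequency: about %.0f Hz" % forcing_frequency(St_target, U_j, h))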
References
1. Olsen, J. F., Rajagopalan, S., Antonia, R. A., “Jet column modes in both a plane jet and a passively modified plane jet subject to acoustic excitation,” Experiments in Fluids, Vol. 35, pp. 278-287, 2003.
2. Samimy, M., Adamovich, I., Kim, J.-H., Webb, B., Keshav, S., Utkin, Y., “Active Control of High Speed Jets Using Localized Arc Filament Plasma Actuators,” AIAA Paper 2004-2130, June-July 2004.
3. Samimy, M., Kim, J.-H., Adamovich, I., Utkin, Y. and Kastner, J., (2006) “Active Control of High Speed and High Reynolds Number Free Jets Using Plasma Actuators,” AIAA Paper No. 2006-0711.


High Albedo Concrete Pavements for Sustainable Design

Student Researcher: Justin A. Stiles

Advisor: Dr. Farhad Reza, P.E.

Ohio Northern University Department of Civil Engineering

Abstract Concrete is a building material that is widely used for roadway pavements and absorbs more heat from sunlight compared to other types of surfaces. This material has a low solar reflectivity, or albedo, which is the ratio of the amount of light reflected from a material to the amount of light incident on the material. Albedo is measured on a scale from 0 to 1. An albedo value of 0 indicates a black body cavity that does not reflect any light, while an albedo of 1 indicates a perfectly reflective surface. Because of its low albedo value, concrete can raise the local ambient temperature in urban areas, which is known as the urban “heat-island” effect. Several detrimental effects on economic prosperity, protection of natural systems, and the quality of life arise directly and indirectly from the heat-island effect. Currently in the United States, concrete is the prevailing construction material, being used for transportation, shelter, recreation, and industry applications, and its use is only expected to rise. A challenge to developed and industrialized nations such as the United States will be to discover a way to create concrete that possesses some environmental benefit. Project Objectives The premise on which this project is built is the P3 Concept of the Environmental Protection Agency. P3 stands for People, Prosperity, and Planet, which are the three pillars of sustainability. The EPA P3 Program encourages students to research and develop sustainable designs that make an impact in these three areas. In terms of affecting people, the heat island effect causes an increase in temperature in urban areas. This in turn increases air pollution because, for example, combustion processes at higher temperatures lead to increased emissions from power plants (Taha et al., 1998). In many areas of the nation, a warming of 2.2°C (4°F) could increase ground-level ozone (O3) concentrations by about 5% (U.S. EPA, 2000). Air quality monitoring data as of March 2006 show that 33 counties in Ohio, which have a total population of nearly 9 million, are designated as non-attainment areas for 8-hr average ozone due to their violation of National Ambient Air Quality Standards (NAAQS) (U.S. EPA, 2006a). Reducing the temperature in urban areas would alleviate ozone concentrations and improve local air quality. Economic prosperity is also affected by the urban heat island effect: it leads to an extra usage of electricity to cool down buildings, which results in larger air-conditioning bills. Simulations of the influence of pavement albedo on air temperature in Los Angeles predict that increasing the albedo of 1,250 sq km of pavement by 0.25 would save cooling energy worth $15 million per year, and reduce smog-related medical and lost-work expenses by $76 million per year (Rosenfeld et al., 1998). Most electricity generation in the U.S. comes from oil, natural gas, and coal-fossil fuels (Benka 2002). Any reduction in consumption of such non-renewable energy sources is an important step towards sustainable development. Because concrete is so widely utilized as a building material, the planet could be helped if innovations in manufacturing concrete materials were developed. The manufacture of cement consumes an enormous amount of energy as the raw materials must be heated to about 2700°F (Mindess et al., 2003). It also contributes to greenhouse gas emission. Worldwide, the manufacture of cement accounts for 6-7% of the total carbon dioxide (CO2) produced by humans (Green Resource Center, 2004). 
Thus, any reduction in the consumption of cement resulting from the use of alternative materials is beneficial to the environment.


The goal of this project was to create a concrete mixture with higher solar reflectance for use in pavement applications using materials that would employ the environmentally conscious principles of P3. The concrete mixtures were designed with the following criteria in mind: economic feasibility, structural integrity, public safety, and sustainability. There were two main objectives: 1) develop concrete mixtures with albedo values 60% higher than conventional concrete while meeting the performance specifications of the Ohio Department of Transportation (ODOT), and 2) utilize at least 10% cement replacement materials.

Methodology
An extensive review of the literature was performed to investigate the solar reflectance of concrete and methods to improve the property. In this research study, it was felt that the best way to promote high reflectance concrete was to innovatively modify concrete mixtures by incorporating materials with which the concrete industry was already familiar. The main approach was to create a whiter concrete by replacing cement, which is the darkest ingredient in concrete, with whiter constituents. The two main constituents that were used to replace cement were fly ash and ground-granulated blast furnace slag (referred to as ‘slag’). Fly ash is a waste product of powdered coal after burning in power plants. It is known to improve several desirable properties of hardened concrete. These include higher ultimate strength, reduced permeability, reduced shrinkage, and increased durability (FHWA, 2005). Slag is a waste product from the blast furnace production of iron from ore. The benefits of slag in concrete include better paste-aggregate bond, higher strength, lower permeability, enhanced durability, improved skid resistance of pavement surface, improved resistance to chemical attack, and reduced heat generation in concrete (NSA, 2004; SCA, 2006). Both fly ash and slag are recovered resources and produce many environmental benefits, which include saving precious space in landfills (TFHRC, 2004). Eleven different concrete mixtures were created for this project; the mixes and their proportions are displayed in Table 1. Mixes 1-3 were standard ODOT mixes, and they were used as the control mixes. Mix 2 contains 24% fly ash (by weight replacement of cement), and Mix 3 contains 30% slag. The effects of using fly ash and slag were studied by varying the amounts of the materials in the remainder of the mixes. Each mix was tested for solar reflectivity, compressive strength, flexural strength, and hydraulic cement setting time. After the results of Mixes 1-10 were obtained, Mix 11 was designed in an attempt to further improve the albedo of the concrete. For all of the mixes, #8 limestone was used as the coarse aggregate, and the fine aggregate used was ASTM C33 gravel. In each mix, standard admixtures were also utilized: high-range water reducer and air entrainment.

Table 1. Concrete Mixtures and Mix Proportions.

Mix   Description                                              Portland Cement   Fly Ash   Slag   Water   Coarse Aggregate   Fine Aggregate
1     ODOT base mix                                            600               0         0      355     1410               1320
2     ODOT high performance concrete 1 (HP1) – 24% fly ash     530               170       0      263     1480               1310
3     ODOT high performance concrete 2 (HP2) – 30% slag        490               0         210    264     1495               1330
4     ODOT HP2 with 30% fly ash                                490               210       0      245     1495               1330
5     ODOT HP1 with 60% fly ash                                280               420       0      233     1480               1310
6     ODOT HP2 with 60% slag                                   280               2         420    258     1495               1330
7     ODOT HP2 with 40% fly ash and 20% slag                   280               280       140    233     1495               1330
8     ODOT HP2 with 20% fly ash and 40% slag                   280               140       280    245     1495               1330
9     ODOT base mix with white sand                            611               0         0      355     1410               1320
10    ODOT base mix with latex (latex is 24% solids)           600               0         0      131     1410               1320
11    ODOT HP2 with 70% slag                                   210               0         490    256     1495               1330

In order to test the reflectivity of each mix, 2-in. concrete cubes were sent to PRI Asphalt Technologies in Tampa, FL. In order to meet the project objective, albedo values of 60% above those of conventional concrete were desired. The mechanical properties of each mix, including compressive strength and flexural strength, were tested in-house. ODOT specifies that concrete pavements have a 28-day


compressive strength of 4000 psi and a flexural strength of 600 psi, so it was desired that the experimental mixes meet these specifications (ODOT 2005). Concrete cylinders, measuring 4” in diameter by 8” in height, were used to test compressive strength. Concrete beams, having approximate 6” x 6” cross sections and measuring approximately 24” in length, were used to test flexural strength.

Results
The percent differences in albedo values of the 11 mixes from the base mix (Mix 1) are presented in Figure 1. The following trends can be observed:
• Concrete mixes with fly ash (Mixes 2, 4, 5) have lower albedo than the conventional mix.
• Concrete mixes with slag (Mixes 3, 6, 11) have higher albedo than the conventional mix.
• The albedo of ternary concrete mixes (Mixes 7 and 8) depends on the proportion of fly ash and slag used. A higher proportion of fly ash tends to decrease concrete’s albedo, while a higher proportion of slag tends to increase concrete’s albedo.
• Whitish ingredients in concrete increase its albedo.
• The variation of albedos within a mix is higher in ternary mixes than in mixes with just one cement replacement. This is also true for concrete with white sand and latex.

The compressive strengths of all of the mixes greatly exceeded the minimum ODOT specification after just 7 days. After 28 days, the strengths ranged from nearly 6000 psi to over 9000 psi. In all but two of the mixes (Mix 5 and Mix 8), the minimum ODOT specification for the modulus of rupture was exceeded. Mix 11, which possessed 70% slag, displayed a 71% increase in albedo over Mix 1 (albedo of 0.58 in Mix 11 compared to 0.34 in Mix 1). Two quantifiable project objectives were exceeded by this mix: providing a 60% higher albedo value and using at least 10% cement replacement materials. The use of white sand or latex does not improve the albedo as much as the use of 60% or more slag. This finding, coupled with the fact that white sand is more expensive than slag, affirms the use of high slag content as a very attractive solution for producing high albedo and environmentally friendly concrete.

Significance and Interpretation of Results
Inspection of the results reveals that Mix 11, with 70% slag cementitious content, would yield a high-albedo and environmentally friendly concrete. Using a high-slag concrete would be beneficial to people, prosperity, and planet. Pavements constructed with high albedos will alleviate the heat island effect, which will reduce cooling costs, help reduce smog, and reduce heat-related illnesses and deaths. In Phoenix, Arizona, measurements of the albedo of pavements indicated that an increase in albedo of 0.1 reduces the pavement surface temperature by 4.7°C (Golden 2005). Based on that observation, a 70% slag concrete could reduce pavement surface temperature by 11.3°C. A concrete pavement that has a higher solar reflectance will not absorb as much heat and will therefore experience less thermal stress. To further enhance the knowledge and understand any possible issues of using high albedo concrete, it is recommended that comprehensive evaluations be conducted on other desirable properties of concrete, e.g., permeability and freeze-thaw durability; changes of concrete albedo in real-world conditions; and its impacts on surface temperature in different climate conditions.
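The 71% albedo increase and the 11.3°C temperature-reduction estimate quoted above follow from simple arithmetic on the reported albedo values and the Golden (2005) sensitivity of 4.7°C per 0.1 increase in albedo; a minimal Python sketch of that calculation follows.

# Back-of-the-envelope check of the albedo gain of Mix 11 and the implied
# surface temperature reduction, using the sensitivity reported by Golden (2005):
# about 4.7 degC cooler per 0.1 increase in albedo.

BASE_ALBEDO = 0.34        # Mix 1 (conventional base mix), from the text
MIX11_ALBEDO = 0.58       # Mix 11 (70% slag), from the text
DEGC_PER_0P1_ALBEDO = 4.7

percent_increase = (MIX11_ALBEDO - BASE_ALBEDO) / BASE_ALBEDO * 100.0
temp_reduction = (MIX11_ALBEDO - BASE_ALBEDO) / 0.1 * DEGC_PER_0P1_ALBEDO

print("Albedo increase over base mix: %.0f%%" % percent_increase)             # ~71%
print("Estimated surface temperature reduction: %.1f degC" % temp_reduction)  # ~11.3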

Figure 1. Percent Difference in Albedo Compared to Base Mix (Mix 1). The bar chart plots one value for each of Mix Numbers 1 through 11; the labeled values are 71%, 15%, 35%, 24%, -10%, 57%, -28%, -39%, 6%, -22%, and 0%.


Acknowledgments
The author extends his gratitude to his project advisor, Dr. Farhad Reza, co-advisor Dr. Kanok Boriboomsomsin, and fellow civil engineering students Audrey Seals, Naomi Schmidt, and Brandon Strohl.

References
1. Benka, S.G. (2002). The energy challenge. Physics Today, Vol. 55, No. 4, p. 38, American Institute of Physics. http://www.aip.org/pt/vol-55/iss-4/p38.html, accessed February 20, 2006.
2. FHWA (2005). Fly ash facts for highway engineers. Federal Highway Administration, http://www.fhwa.dot.gov/pavement/recycling/fach01.cfm, accessed February 20, 2006.
3. Golden, J. S. (2005). A meso-scale to micro-scale evaluation of surface pavement impacts to the urban heat island – aestas hysteresis lag effect. http://caplter.asu.edu/docs/smartWebArticles/GoldenJMesoScaleHystersisLagArticle.pdf, accessed February 20, 2006.
4. Green Resource Center (2004). High volume fly ash concrete. http://www.greenresourcecenter.org/MaterialSheetsWord/FlyAshConcrete.pdf, accessed February 20, 2006.
5. Mindess, S., Young, J. F. and Darwin, D. (2003). Concrete, 2nd edition. Prentice Hall, Upper Saddle River, New Jersey.
6. NSA (2004). Blast furnace slag: the construction material of choice. National Slag Association, http://www.nationalslagassoc.org/PDF_files/NSABlastFurn.PDF, accessed February 20, 2006.
7. ODOT (2005). Construction and material specifications: section 499, 451 and 452. Ohio Department of Transportation.
8. Rosenfeld, A. H., Akbari, H., Romm, J. J., and Pomerantz, M. (1998). Cool communities: strategies for heat island mitigation and smog reduction. Energy & Buildings, 28(1), pp. 51-62.
9. SCA (2006). Slag Cement Association. http://www.slagcement.org, accessed March 10, 2006.
10. Taha, H., Konopacki, S., and Akbari, H. (1998). Impacts of lowered urban air temperatures on precursor emission and ozone air quality. Journal of the Air & Waste Management Association, 48(9), pp. 860-865.
11. TFHRC (2004). Blast furnace slag. Turner-Fairbank Highway Research Center, http://www.tfhrc.gov/hnr20/recycle/waste/bfs1.htm, accessed February 20, 2006.
12. U.S. EPA (2000). Global warming – impacts: health. U.S. Environmental Protection Agency, http://yosemite.epa.gov/oar/globalwarming.nsf/content/ImpactsHealth.html, accessed February 20, 2006.
13. U.S. EPA (2006a). AirData: access to air pollution data. U.S. Environmental Protection Agency, http://www.epa.gov/air/data/index.html, accessed February 20, 2006.


Teaching Resources Easier to Find

Student Researcher: Bud L. Strudthoff

Advisors: Dr. Gary Slater and Ms. Julie Thompson

University of Cincinnati College of Education, Criminal Justice, and Human Services

Department of Teacher Education

Abstract
Teaching to match the academic standards of the state of Ohio, or of any particular school district, can at times be difficult. It must be said that in order to teach you must first know. Many pre-service and current teachers have little understanding of space, NASA, and space exploration. By making resources on the topics of astronomy and space studies that align with academic standards more readily available to pre-service and current teachers, a better understanding, and hopefully an increased passion to teach these topics, will follow. Since the students of today are the leaders of tomorrow, we cannot leave such a large void in what is taught and understood in a subject of such high importance.

Project Objectives
To create a resource file which will align content standards with grade level indicators to assist pre-service and current teachers in their instruction relating to astronomy and space studies. Once created, the file is to be distributed to pre-service teachers in the hope that they will use, and share, the resource with future co-workers. With more people better equipped to teach astronomy and space science, students at a very young age can develop a love of these subjects.

Methodologies Used
A brief survey was sent to 20 pre-service early childhood and middle childhood teachers. Once the surveys were returned, I noticed that, above all, pre-service teachers did not know of a specific place to obtain lessons that align with the content standards of the state of Ohio for astronomy. Initially I had hoped my project would focus on innovative ways to instruct students on the ways in which Mars and other planets are more closely studied. However, upon receiving the results of the survey, I felt that it was more important to create a resource file of lesson plans that would fulfill state requirements and grade level indicators to assist pre-service and current teachers in their teaching of astronomy. The packet contains websites to enhance subject knowledge as well as successful lesson plans that meet the academic standards and grade level indicators for the state of Ohio. This is a resource that I have shared with the 20 students surveyed, and I plan to continue to distribute and update it as academic standards and grade level indicators change.

Results
Following the distribution of the information packets, many of the pre-service students surveyed expressed interest in keeping their packet for use in their future classrooms. A resource which pulls together information that at times seems rather broad, and breaks it into specific categories, will save a lot of planning and research time for pre-service and current teachers in the field.

References
1. http://www.ode.state.oh.us/academic_content_standards/
2. www.ohiorc.org
3. www.nasa.gov
4. Pre-service freshmen, sophomores, and juniors at the University of Cincinnati, College of Education, Criminal Justice, and Human Services.


Impact of Inquiry Teaching Strategies Upon Student Learning

Student Researcher: Elizabeth M. Thorndike

Advisor: Sandra Thorndike

Youngstown State University Departments of Math, Science and Education

Abstract
In addition to fostering students’ understanding of the nature of science, the development of science processes, and the principles of science, a middle level science educator must address scientific content knowledge and processes which feature technological design, inquiry, scientific ways of knowing, and connections across the domains of science. A middle level science classroom must provide an inviting, supportive, and safe environment so that students may practice using appropriate scientific processes and principles. To promote positive attitudes and behaviors that will motivate all students to tackle the many challenging learning activities that inquiry science presents, middle school educators and students must hold themselves and each other to high expectations. Nationwide efforts to increase the quality and effectiveness of science education have been made through the partnership of specialists and highly qualified teachers; they have developed a research-based approach to science education in order to ensure accurate and systematic improvement to the education system. These research-based resources have been implemented through the Ohio Resource Center (ORC) in pilot districts where teachers use submitted materials for curriculum and assessment.

Project Objectives
The main objective for this project was to gauge student learning through the use of inquiry science teaching methods based on pre- and post-assessment data. I also wanted to gain an overall understanding of how inquiry science classrooms enable students to learn science increasingly well.

Methodology Used
This project was conducted during a field experience in a class of 21 seventh graders, in a science classroom at St. Charles School in Boardman, Ohio. The project took the form of a unit that utilized inquiry learning at various points and was taught over a period of five weeks. The first part of the research involved discovering students’ prior knowledge (PK) of cells, cell structures, and cellular processes; identifying misconceptions; and identifying what the students wanted to learn about cells, cell structures, and the life processes of cells. Throughout the pre-assessments, various tasks were given in order to use multiple levels of Bloom’s Taxonomy: drawing a cell (synthesis level) required students to create a cell as they understood it to be; generating questions to make a KWL chart (knowledge and comprehension levels) required students to write what they knew and then expand upon that to include concepts that they wanted to learn about; and a questionnaire (knowledge, comprehension, and application levels) asked students to identify relationships between the terms for processes and the life-sustaining functions that they provide. The results of the pre-assessments were recorded and analyzed. Throughout the teaching of this unit, students were given multiple opportunities to engage in inquiry learning. Shortly after a brief discussion about cells, students were assigned to create and present an original caricature of a cellular organelle. This caricature had to outwardly portray qualities that it shared with the organelle. Another inquiry study involved the use of a Venn diagram to compare plant and animal cells in order to discover more similarities than differences between the two. A third inquiry activity required students to investigate unfamiliar cellular terms and sketch an image that would help them remember their meanings.
The final inquiry learning activity involved cell division and required the students to act out the phases of cell division kinesthetically. The whole class was given this one assignment and was to assign roles and determine the best way to physically perform cell division.


Results All students made progress toward each learning goal; the post assessments for each learning goal included the completion of 1) cell diagrams, 2) a function chart, 3) a quiz, and 4) a cell division activity. With respect to the first and second post assessment, 2 students made substantial progress toward achievement, and 19 students satisfied the criteria. Upon their completion of the quiz, 4 students made substantial progress toward achievement, and 17 students met the criteria. After evaluating the participation of each student in the cell division activity, it was determined that 2 students made substantial progress toward the criteria and 19 students met the criteria. Significance and Interpretation of Results The first pre-assessment was the drawing of a cell. Four students drew only the outer boundary of a cell; 15 students created cell drawings showing a cell membrane, nucleus and region of cytoplasm; 2 students drew both plant and animal cells that featured additional important cell structures. The corresponding post assessment was two unlabeled diagrams of plant and animal cells and a chart of structures and functions. All students made progress toward this goal – 2 students with 70% to 89% accuracy, and 19 students with at least 90% accuracy upon their completion of the diagrams and chart. The second pre-assessment was a KWL chart that each student created. Three students recorded very few bits of PK and only had 1 question in their W column; 17 students showed a good basis of PK and generated at least 4 more questions in their W column; and 1 student provided elaborate detail about her PK and generated specific questions to investigate in her W column. The corresponding post assessment was a quiz wherein all students again made progress toward this learning goal – 4 students with 70% to 89% accuracy, and 17 students with at least 90% accuracy upon completing the quiz. The third pre-assessment was a questionnaire of six questions about the processes of a cell. Six students answered four or less questions correctly; 15 students answered five of the questions correctly; and none of the students answered all six questions correctly. The corresponding post assessment was in the form of a quiz wherein 4 students made unsubstantial progress with less than 69% accuracy, 9 students made substantial progress with 70% to 89% accuracy, and 8 students met the criteria with at least 90% accuracy. The fourth pre-assessment was a KWL chart of cellular processes that each student created. The students brainstormed a few items for the K and W columns together and then recorded as many pieces of PK as each had while developing questions they wanted to answer. Five students recorded very few pieces of PK and only had 1 question; 16 students showed a good basis of PK and generated at least 4 more questions in their W column; and no students provided elaborate detail about PK or questions to investigate. The corresponding post assessment was in the form of a two-day activity. On day one the students investigated diffusion and osmosis through recording observations and critical thinking. On day two, the students engaged in a kinesthetic activity to act out the phases of mitosis. All students made progress toward achieving the criteria for this assessment – 2 students made substantial progress with 70% to 89% accuracy, and 19 students met the criteria with at least 90% accuracy. A subgroup was chosen for analysis in order to more thoroughly analyze student learning. 
The subgroup, gender, was not evenly split among the whole group so I needed to compare them by what percentage of each group achieved the criterion for each goal. There were 10 girls and 11 boys in the class. In pre-assessment 1, 1 girl and 3 boys portrayed up to 69% accuracy in their cell drawings; 8 girls and 7 boys were 70% to 89%; and 1 girl and 1 boy showed over 90% accuracy. The results of post assessment 1 showed that no students showed unsubstantial progress; 1 girl and 1 boy made substantial progress with 70% to 89% accuracy on their diagrams and chart; and 9 girls and 10 boys showed achievement with at least 90% accuracy. The second pre-assessment involved the KWL chart and showed 3 boys having up to 69% understanding of cell structures and functions; 9 girls and 8 boys showed 70% to 89% PK, and 1 girl showed over 90% of the expected PK in this pre-assessment. Once again, all students showed progress toward the


achievement of the learning goal. Four boys made substantial progress toward the goal by meeting the criteria with 70% to 89% accuracy, and ten girls and seven boys showed achievement of the learning goal with at least 90% accuracy. No students showed unsubstantial progress toward the learning goal.

Figures
Whole Group Assessment Data

Subgroup: Gender: Comparing the percent of accuracy of girls versus boys to meet the criteria of each assessment
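Because the class was not evenly split by gender, the subgroup comparison above works in within-group percentages rather than raw counts; the short Python sketch below illustrates that normalization using the pre-assessment 1 counts reported in the text (the variable names and band labels are illustrative only).

# Within-group percentage comparison for the gender subgroup, using the
# pre-assessment 1 counts reported in the text (10 girls, 11 boys).

BANDS = ("0% to 69%", "70% to 89%", "90% to 100%")
girls = {"0% to 69%": 1, "70% to 89%": 8, "90% to 100%": 1}
boys = {"0% to 69%": 3, "70% to 89%": 7, "90% to 100%": 1}

def to_percentages(counts):
    """Convert per-band counts to percentages of that group's size."""
    total = sum(counts.values())
    return {band: 100.0 * counts[band] / total for band in BANDS}

for label, counts in (("Girls", girls), ("Boys", boys)):
    pct = to_percentages(counts)
    print(label + ":", ", ".join("%s: %.0f%%" % (band, pct[band]) for band in BANDS))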

Acknowledgments
This research would not have been possible without the guidance of Middle School science educator Dr. Janet Williams of Youngstown State University; Mrs. Sandra Thorndike, NASA advisor of YSU; the Middle School Internship opportunity provided by Youngstown State University with the cooperation of experienced teachers Mrs. Debbie Beasley and Mrs. Jan Weitzman at St. Charles School; the NASA Ohio Space Grant Consortium; and the ORC workshop leader.

References
1. www.ohiorc.org
2. www.ode.state.oh.us
3. Teaching Science as Inquiry: Arthur A. Carin, Joel E. Bass, Terry L. Contant: 10th Edition
4. Ohio Academic Content Standards
5. National Science Education Standards


Dynamics of CVT-Based Hybrid Vehicles

Student Researcher: Henry Tran

Advisor: Dr. Amit Shukla

Miami University Department of Mechanical and Manufacturing Engineering

Abstract
A Continuously Variable Transmission (CVT) is an automatic transmission without discrete gear shifting. This study aims to improve the dynamics of a CVT-based hybrid vehicle. These vehicles have significant advantages in high gas mileage, shift shock elimination, low fuel emissions, and high drive acceleration. In this project, a drive-train model was analyzed via simulation to show the parametric effects on the performance of the CVT in terms of reducing torque fluctuations and jerks.

Project Objective
The goal of the project is to minimize the torque fluctuations so that the passenger comfort level is optimized for a CVT-based hybrid transmission. This will lead to an overall enhancement in the performance of the CVT hybrid vehicle.

Methodology Used
The CVT drive-train model (as shown in Figure 1) describes the system dynamics, includes the effects of deformation, rate of change in deformation, CVT ratio, and angular velocity, and has several quasi-static parameters which affect the performance of this system. Parametric simulation results are used to analyze the effects of changes in the parameters and input (CVT ratio). Both the transient and steady state solutions are studied for short-term and long-term impacts on the system dynamics.

Figure 1. —Drive-train Simulation Model—

The CVT system model [1, 2] is a closed-loop state-space model with four states, x = [x1 x2 x3 x4]^T = [ε ε̇ i φ̇_flw]^T, which represent deformation, change in deformation, CVT ratio, and angular velocity, respectively. The state-space model is given by ẋ = f(x, w) + g(x, u), where the specific system function f(x, w, t) consists of the coupled nonlinear equations given in Eq. (1), w = [T_engine, T_ext]^T, and g(x, u) = [0, x4·u, u, 0]^T. Further, specific nominal values for all the quasi-static parameters are given in Table 1.

Table 1. —Nominal Values of System Parameters—

k_t (spring stiffness): 1000 N/m
b_t (damping coefficient): 10 Ns/m
η (efficiency): 0.87
J_flw (inertia of flywheel): 0.4 kgm²
J_veh (inertia of vehicle): 135.43 kgm²
b_veh (damping coefficient of vehicle): 0.0113 Nms²
T_engine (torque): 100 Nm
T_ext (torque): 3.59 + 1.09 × φ̇^2.0
u (CVT ratio): 0.1
b_flw (damping coefficient of flywheel): 0.0113 Nms²

Since the model contains the CVT ratio as input, torque fluctuations were simulated with the CVT ratio given by

u = C + A sin(Ωt).

The parametric studies were conducted by changing the nominal values of the system parameters within a ±100% range. Also, the amplitude (A) and angular frequency (Ω) of the CVT ratio were modified to observe any overall changes in the transient and steady state response.

Results Obtained
Simulations show that adding a constant C to the CVT ratio input, u = C + A sin(Ωt), yields a short-term reduction in torque fluctuation and jerk, specifically in the angular velocity of the flywheel. This can be seen in Case 4, where the constant C equals 0.01. In addition, the system is unstable when the constant term C = 0, as in Case 2. Further studies are needed to design control systems for the CVT-based drive. Note that: Case 1: C = 0.1, A = 0, Ω = 0. Case 2: C = 0, A = 0.1, Ω = 1. Case 3: C = 0.1, A = 0.1, Ω = 1. Case 4: C = 0.01, A = 0.1, Ω = 1.
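A minimal Python sketch of how the CVT-ratio input for the four cases and the ±100% parameter sweep could be set up is given below; the drive-train dynamics of Eq. (1) are not reproduced here, and the simulate() call mentioned in the comments is a hypothetical placeholder for the actual model.

import numpy as np

# Sketch of the CVT-ratio input u(t) = C + A*sin(Omega*t) for the four cases
# from the text, plus a +/-100% sweep of one nominal parameter (k_t).
# simulate(u, t, params) is a hypothetical placeholder for the drive-train model.

CASES = {
    "Case 1": dict(C=0.10, A=0.0, Omega=0.0),
    "Case 2": dict(C=0.00, A=0.1, Omega=1.0),
    "Case 3": dict(C=0.10, A=0.1, Omega=1.0),
    "Case 4": dict(C=0.01, A=0.1, Omega=1.0),
}

def cvt_ratio(t, C, A, Omega):
    """CVT ratio input u(t) = C + A*sin(Omega*t)."""
    return C + A * np.sin(Omega * t)

def sweep_values(nominal, n=5):
    """Parameter values spanning +/-100% of the nominal value."""
    return np.linspace(0.0, 2.0 * nominal, n)

t = np.linspace(0.0, 100.0, 2001)    # 0-100 s, matching the plotted time span
for name, params in CASES.items():
    u = cvt_ratio(t, **params)
    # A full study would call simulate(u, t, params) here and compare the
    # resulting flywheel angular velocity across cases.
    print(name, "u(t) ranges from", round(u.min(), 3), "to", round(u.max(), 3))

print("k_t sweep (+/-100% of 1000 N/m):", sweep_values(1000.0))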

References
1. Spijker, E. (1994). Steering and control of a CVT based hybrid transmission for a passenger car. Netherlands: CIP-Gegevens Koninklijke Bibliotheek, Den Haag.
2. Veldpaus, F. E., & Shen, S. (2004). Analysis and control of a flywheel hybrid vehicular powertrain. IEEE Transactions on Control Systems Technology, 12(5), 645-660.


Figure 2. Angular velocity of the drive (rev/s) versus time (0–100 s) for u = C + A sin(Ωt), where: Case 1: C = 0.1, A = 0, Ω = 0; Case 2: C = 0, A = 0.1, Ω = 1; Case 3: C = 0.1, A = 0.1, Ω = 1; Case 4: C = 0.01, A = 0.1, Ω = 1.


Experiment Validation of a Precision Gear Pump

Student Researcher: William B. Tutor

Advisor: Dr. Hazel Marie

Youngstown State University Mechanical and Industrial Engineering

Abstract
A hydraulic system works at the highest efficiency only if each component also functions at the highest efficiency. Designing products that normally operate at optimum efficiency, or that perform with significant improvements in efficiency, is the ultimate goal of designers and engineers. Since improving the efficiency of fluid power devices requires knowledge of flow characteristics and a thorough understanding of flow mechanisms, accurate analysis of these flows has been sought for many years to achieve this goal. Research in the numerical modeling and optimization of precision gear pump design is currently being conducted at YSU. Validation of the models is a necessity. For this research, a Parker 600 series gear pump will be instrumented with pressure transducers and thermocouples for the purpose of mapping the pump’s thermo-fluid properties.

Project Objectives
The aim of this research project is to instrument an existing Parker Series 600 pump in order to validate a numerical analysis experimentally. One aspect of the project is generating a revised CAD drawing of the gear pump and machining it for the necessary pressure transducer. This requires choosing where to mount the pressure transducer as well as selecting a proper pressure transducer and flow meters that will gather data via the data acquisition system. The data acquired will then be compared to the numerical model for verification.

Methodology Used
Many ideas as to the proper location and instrumentation of the pressure transducer were discussed. The ideal location was chosen to be midway down the gear, near the root of the teeth. As seen in Figure 3, this location is ideal because we may map the pressure throughout the gear’s 360° rotation, which passes through discharge to the high-pressure output and back around, with the pressure reducing to the suction side of the gear pump. This project also required the calibration of the chosen pressure transducer as well as an ample data acquisition system.

Results Obtained
Results have been obtained for a two-dimensional numerical model. Figure 1 shows the gear pump’s housing at the chosen location to monitor the static pressures. One point of interest to validate with the experiment is shown in Figure 2, while the static pressure distribution is shown in Figure 3.

Figure 1. Model of Housing Showing Pressure Transducer Hole.



Figure 2. Model of Gear Showing Tooth Root Pressure Discharge Location.

Figure 3. Static Pressure Distribution in Numerical Model.

References
1. Fluent 6.1 Manual, Gambit Modeling Guide, Dynamic Meshing Tutorials.
2. Santosh Kini, et al., “Numerical Simulation of Cover Plate Deflection in the Gerotor Pump”, 05AE-185, 2004 SAE International.
3. Ivantysyn and Ivantysynova, “Hydrostatic Pumps and Motors”, Tech Books International.
4. Manring, N. D. and Kassaragadda, S. B., “Theoretical Flow Ripple of an External Gear Pump”, Trans. ASME, V. 125, September 2003.



Neutron Stability Derived from an Electrodynamic Model of Elementary Particles

Student Researcher: Emily M. Van Vliet

Advisor: Dr. Gerald Brown

Cedarville University Elmer W. Engstrom Department of Engineering

Abstract A geometrically realizable form of the neutron based on the electrodynamic Charge Fiber Model developed by Bergman and Lucas has a binding energy of 5.0305x10-13 Joules (Bergman, 2006), sufficiently large for long-term stability in the vacuum existing in most of space, and sufficiently small for short-term beta decay in earth laboratories where particle pressure from bombardment is significant. Prior Models of the Neutron Democritus of Greece (460 B.C.) is generally credited with the idea that the universe is composed of tiny building blocks of matter (Lucas, 2004). From the time of Democritus to modernity, models have been evaluated by their capacity to accurately characterize and describe known particles, and to predict the undiscovered. Historically, the Standard Model of elementary particles has been evaluated by its ability to describe particle decay such as neutron decay (Weitfeldt et al., 2005). The simplicity of the neutron makes it particularly attractive for such studies (Gudkov et al., 2006; Nico et al., 2005; Wilburn et al., 2005). According to Dewey et al., “…the free neutron plays a crucial role in understanding the physics of the weak interaction and testing the validity of the Standard Model” (2003). The Charge Fiber Model was one of the first of the classical electrodynamic closed string or ring models of elementary particles (Bergman 1990). In this paper the same principle can be applied to test the validity of the Charge Fiber Model. Project Objectives In 1917, Arthur Compton performed a series of experiments on the size and shape of the electron indicating that the electron consists of thin flexible rings of charge. Later, in 1966, Winston Bostick (one of Compton’s last graduate students) hypothesized that the electron was like a plasmoid—a spring shaped fiber connected end-to-end to create a deformable toroid (Bostick, 1991). Charles Lucas expanded upon this work by proposing a Classical Electromagnetic Theory of Elementary Particles which posits fundamental particles consisting of 3 to 27 spiraling charge fibers in configurations determined by combinatorial geometry (the geometry of packing and covering) (2004). This report presents the fundamental concepts of the Charge Fiber Model, examines its ability to predict neutron lifetime, and examines its predictions with respect to the decay of other particles. It then compares the results to those offered by the Standard Model for the purposes of examining the viability of the Charge Fiber Model and introducing the audience to an elegant alternative to the Standard Model. Methodology and Results Starting with a Toroidal Model, Bergman and Lucas built on the earlier concept that the proton and electron were simple hollow rings of spinning charge (Bergman, 1990). The model was modified when it was realized that the circumferentially spinning charge must also twist around the body of an imaginary toroid in order to be stable. This spiral structure, initially described as a Helicon, is more generally identified as a charge fiber. Lucas refined the Charge Fiber Model to include 3 primary charge fibers, which may in turn be composed of secondary and even tertiary fibers, each with a charge of –e/3, +e/3, or 0. His work shows how combinations of these fibers can be physically correlated to the numerous quarks, leptons, and hadrons in the Standard Model. Figure 1 shows the structure of a hypothetical particle comprised of a single charge fiber, showing the helical or twisting form of the fiber. 
According to this model, all hadrons, at least, are formed from an odd number of charge fibers. During the decay process, or in any interaction with other particles, the helicity or spin combination of the charge fibers is


conserved and the number of the charge fibers is conserved. In contrast, the Standard Model only conserves net charge (Lucas, 2004). Lucas and Bergman also propose that the laws of electrodynamics and mechanics apply on all size scales including elementary particles. The physical nature of this model explains the four fundamental forces as manifestations of the classical electromagnetic force (Lucas, 2004). The Proton: The ring structure of the proton is similar to that of the electron, except for a significantly smaller radius and a net charge of +e. Closer examination of Figure 2 shows that the much greater mass and energy of the proton are due to the compound structure of its fibers, being composed of one primary fiber (↓) and two secondary fibers (⇑⇑). The -e/3 charge of the primary fiber cancels +e/3 of the charge of one of the secondary fibers, leaving a net charge of +e, but the tightly bound configuration of the additional fibers involves significantly more binding energy. Consequently, the proton has much more mass than the electron. The Neutron: Similar to the Standard Model, the Lucas Charge Fiber Model proposes a neutron to be a combination of three fibers: a proton, a neutrino, and an electron. Bergman, in contrast, proposes that a neutron is composed only of a proton and an electron. This configuration consistently conserves charge, mass, and energy, but doesn’t require a neutrino in the decay process. He models the proton and electron of the neutron as coplanar rings of charge, a magnetically coupled unit arranged as concentric rings. The electric and magnetic attraction of the electron and proton causes the radius of the proton to be 183% larger than when unbound and the electron to be two orders of magnitude smaller, creating a much smaller configuration than an electron loosely bound to a proton, or a single electron in free space. Figure 3 shows the relationship between the radius of the electron and proton as a function of the planar separation between them along a common axis. Detailed simulations with Mathematica demonstrate that an energy minimum exists when the electron and proton are coaxial and coplanar. They will remain in that position unless disturbed by an outside force that overcomes the binding energy of 5.0305x10-13 Joules. It is proposed that decay occurs when the neutron is bombarded by other particles such that the proton is relocated (moved) sufficiently outside of the potential energy minimum. Then the binding force is greatly diminished and the proton and electron separate (Bergman, 2002). Because the stable region for the neutron is limited to a small fraction of the radius of the bound electron, a small axial displacement is often sufficient to cause the neutron to decay (Bergman, 2006). A unique feature of the Charge Fiber Model is its ability to predict nuclear decay using knowledge of Nuclear Binding Energies (NBEs) which result from the individual interactions between protons and electrons that are grouped within the nucleus. The Lucas-Bergman model of the nucleus consists of nested shells of electrons or protons. These electrons and protons are grouped in triplets (two protons, one electron) where an inner ring consists of protons, the next ring of electrons, then protons again and so on. Ed Boudreaux and Eric Baxter have created a simulation program to find the NBE’s. 
The program hypothesizes the location of neutrons and protons within a nucleus and uses an iterative solution to gradually and methodically shift the location of the neutrons and protons to determine the exact geometric configuration where the particles have the lowest energy, and hence the configuration where the nucleus is most stable. Boudreaux, as reported by Bergman, calculated NBE’s and used this information to confirm the nuclear structure of many elements by deriving the theoretical half-lives of each isotope shown in Figure 4. As Boudreaux and Baxter have noted, their method differs from standard radiometric decay dating methods which assume only one shell structure is generated in the decay process. An unexpected and fascinating observation reported by Boudreaux and Baxter is the occurrence of two different shell structures (with one electron being in shell 2 or shell 3), with drastically different half-lives, for the same isotope of 40K. In addition to the reported half-life of 1.3 billion years, there is a second stable arrangement with a local


energy minimum showing a half life of only 15 hours (Bergman, 2002). This shorter half-life of potassium will never be noticed unless researchers specifically look for it in newly produced 40K. As reported by Bergman (2002), Boudreaux and Baxter have calculated the NBEs for a series of particles, using the Charge Fiber Model, approximating the protons and electrons as loops of charge with finite dimensions, charges, and magnetic moments. The equation derived by Boudreaux and Baxter used this information to predict the NBE. The research accurately reproduced observed nuclear spins for all simulated isotopes. This very significant result is a strong indicator of the Charge Fiber Model’s accuracy and usefulness. The Mayer Quantum Shell Model only reports 65 to 70 percent accuracy in calculating observed nuclear spins (Lucas, 2004). Significance Lucas, Bergman, Boudreaux, and Baxter’s work with the Charge Fiber Model offers an important contribution to the field of particle physics. While the ring basis of this physical electromagnetic model of fundamental particles is not new, the more detailed Charge Fiber Model and recent developments using numerical methods to perform the detailed energy calculations offer an alternate view of atomic structures by associating them with a geometrically realizable form. The Charge Fiber Model’s success to date demonstrates the potential for future application of the model in describing and predicting the nature and interactions of atomic particles. Figures and Tables

Figure 1. Single fiber toroidal Charge Fiber Ring Model (Bergman, 2002).

Figure 2. Examples of Charge Fiber Model Particle structure (Lucas, 2006).


Figure 3. Radii of the electron and proton within a neutron vs. their coaxial separation (Bergman, 2006).

Figure 4. Half-lives based on calculated nuclear binding energy vs. reported half-lives (Bergman, 2002).


References 1. Bergman, David S. “Fine Structure Properties of the Electron, Proton, and Neutron.” 2006.

Foundations of Science. Feb. 2006. <http://www.commonsensescience.org/pdf/pdf/fine_stucture_properties_ LoRes_3-2-06_FoS_V9N1.pdf>

2. Bergman, David L. and Wesley, J. Paul. “Spinning Charged Ring Model of Electron Yielding Anomalous Magnetic Moment.” Galilean Electrodynamics, Volume 1, pp. 63-67, 1990.

3. Bergman, David L. “Nuclear Binding and Half-Lives.” 2002. Foundations of Science. Sept. 2005. <http://commonsensescience.org/pdf/pdf/nuclear_binding_half-lives.pdf>

4. Bergman, David L. “A Theory of Forces.” Kennesaw: Common Sense Science, Inc., 2002. 5. Bostick, W. H. (1991). Mass, Charge, and Current: The Essence and Morphology. Physics Essays

Volume 4, Number 1. 6. Dewey, M. S.; Gilliam, D. M.; Nico, J. S. “Measurement of the Neutron Lifetime Using a Proton Trap.”

Physical Review Letters, 2003. 7. Gudkov, V.; Greene, G. L.; Calarco, J. R. (2006). General classification and analysis of neutron B-

decay experiments. Physical Review C, 73. 8. Lucas, C. W. A Classical Electromagnetic Theory of Elementary Particles. Foundations of Science,

2004. Retrieved September 2005 from <http://commonsensescience.org/pdf/pdf/elementary_particles_part_1_FoS_V7N4.pdf>

9. Lucas, Charles W. Jr. and Joseph Lucas. “A Physical Model for Atoms and Nuclei, Part 2.” Galilean Electrodynamics. Jan/Feb 1996.

10. Lucas, Charles W. Jr., “A Classical Universal Electrodynamic Force”, Proceedings of the Natural Philosophy Alliance Annual Meeting, Tulsa, OK. Feb. 2006.

11. Nico, J. S.; Dewey, M. S.; Gilliam, D. M.; Weitfeldt, F. E.; Fei, X.; Snow, W. M.; Greene, G. L.; Pauwels, J.; Eykens, R.; Lamberty, A.; Van Gestel, J. & Scott, R. D. (2005). Measurement of the neutron lifetime by counting trapped protons in a cold neutron beam. Physical Review C, 71.

12. Weitfeldt, F. E.; Fisher, B. M.; Trull, C.; Jones, G. L.; Collet, B.; Goldin, L.; Yerozolimsky, B. G.; Wilson, R.; Balashov, S.; Mostovoy, Y.; Komives, A.; Leuschner, M.; Byrne, J.; Bateman, F. B.; Dewey, M. S.; Nico, J. S. & Thompson, A. K. (2005). A method for an improved measurement of the electron-antineutrino correlation in free neutron beta decay. Nuclear Instruments and Methods in Physics Research A, 545.

13. Wilburn, W. S.; Bowman, J. D.; Mitchell, G. S.; O’Donnell, J. M.; Penttila, S. I. & Seo, P. N. (2005). Measurement of Neutron Decay Parameters – the abBA Experiment. Journal of Research of the National Institute of Standards and Technology, 110. pp. 389-393.

Acknowledgments

• Dr. Gerald Brown for his continual support, effort, and guidance in this project.
• Dr. Charles Lucas, Mr. David Bergman, Dr. Edward Boudreaux, Mr. Eric Baxter, and Dr. Glen Collins for their assistance in researching and understanding this material.
• Mr. Chuck Allport for organizing the OSGC scholarship program at Cedarville University.
• The OSGC, Dr. Kenneth DeWitt, and Laura Stacko for their work in organizing and facilitating this educational opportunity.

Symbol key for the charge fiber diagrams:
• a −e/3 charge fiber loop;
• a +e/3 charge fiber loop;
• two −e/3 charge fibers intertwined with left-handed helicity, acting as a larger charge fiber;
• two −e/3 charge fibers intertwined with right-handed helicity, acting as a larger charge fiber;
• two +e/3 charge fibers intertwined with left-handed helicity, acting as a larger charge fiber;
• two +e/3 charge fibers intertwined with right-handed helicity, acting as a larger charge fiber;
• one +e/3 and one −e/3 charge fiber intertwined with left-handed helicity, acting as a larger charge fiber;
• one +e/3 and one −e/3 charge fiber intertwined with right-handed helicity, acting as a larger charge fiber.


AirBorne Laser Scanner Feature Extraction

Student Researcher: Don T. Venable

Advisor: Dr. Maarten Uijt de Haag

Ohio University School of Electrical Engineering and Computer Engineering

Abstract This paper describes the methodology and algorithms used in an implementation of a downward-looking Airborne Laser Scanner (ALS)-based terrain and feature navigation system and integrity monitor. Using a high accuracy and high resolution ALS sensor, the described integrity monitor can first separate features from the terrain and then use the extracted feature data to detect and observe systematic and blunder errors in a terrain feature database, and determine aircraft position. The feature integrity monitor is different from previous research performed at Ohio University in that it extracts specific features, such as buildings, roads, and towers, and performs a consistency check between these objects and a stored feature database, and has the ability to use these features to aid in navigation system performance. To isolate features from the surrounding environment a four-part building extraction algorithm is used. Once the high-frequency building shapes are extracted, they can be compared to the onboard feature database to determine changes and errors. The four-part building extraction algorithm described in this paper has the following characteristics: it works on non-uniformly spaced point-cloud data, it is designed for integrity monitoring rather than complete scene reconstruction, and the automatic feature extraction is not dependent on (but may use if available) a-priori feature shape information. This paper outlines the four-part building extraction algorithm and provides insight into its operation by applying the algorithms to ALS data collected on NASA’s DC-8 Airborne Laboratory over Reno, Nevada in 2003. Introduction The use of Airborne Laser Scanner (ALS) technology to produce very dense and accurate maps of the terrain and the terrain features in LIght Detection And Ranging (LIDAR) mapping systems has become very common in the Geographic Information Systems (GIS) community. Terrain and feature databases created by these systems typically have measurement accuracies on the order of a decimeter and have horizontal measurement resolutions a couple of meters or better. These high accuracy and high density characteristics have made the use of ALS data of interest to the navigation community. Feature Extraction The algorithm for separating feature and terrain data from ALS point cloud data is presented in this section. When referring to the ALS point cloud data which contains both the terrain and the features the data will be called the Digital Surface Model (DSM). The feature extraction techniques described are similar to techniques used in the GIS community for feature identification, but have slightly different constraints. One difference is that rather than placing the focus on extracting features for complete scene recreation, the method focuses on the extraction of a feature set that is to be compared against a precompiled feature database. A second difference is the operational criterion of real-time operation. In the described algorithm, the ALS point cloud data are first separated into terrain and feature data. The terrain data, which should be relatively free of large gradients, are then used in a downward-looking terrain database integrity monitor. The terrain data without features is referred to as the Digital Terrain Model (DTM). The remaining feature data is examined to determine if any known features, such as buildings, exist. If these known features are found, their sharp gradients can be used to enhance the aircraft’s calculated position. 
The building extraction method has four major steps: the separation of the terrain from features, feature region growth, planar roof-face determination, and edge/corner detection.


Terrain/Feature Separation The process presented in this paper is based on a method described in [1] where a DTM is generated and then subtracted from the original airborne laser scanner point cloud data, the DSM. The result is a classification of our data points into two sets: points belonging to terrain, or points that belong to features such as buildings or vegetation. Illustrated in Figure 1, the first step in the feature extraction and identification is the separation of the features from the terrain. To separate terrain from non-terrain points the method initially generates a reasonable approximation of the terrain from the acquired ALS data. The DTM is created by using the iterative process, described in [1], where a linear approximation of the terrain surface is generated and each point is assigned a weight based on a function of its residual distance from the generated surface approximation. After a few iterations points are completely separated into either the feature set or the terrain set.
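The iterative weighting scheme described above can be sketched as follows. This is an added illustration, not the authors' code: it assumes a single global least-squares plane rather than the full surface model of [1], and the weight function and thresholds are placeholders.

```python
import numpy as np

def separate_terrain(points, n_iter=5, k=1.0, reject=2.5):
    """Rough DTM/feature split: iteratively fit a surface and down-weight
    points that sit high above it (buildings, vegetation).
    points: (N, 3) array of x, y, z laser returns.
    Returns a boolean mask, True for points classified as terrain."""
    x, y, z = points.T
    w = np.ones(len(points))                 # start with equal weights
    for _ in range(n_iter):
        # Weighted least-squares plane z ~ a*x + b*y + c, a stand-in for the
        # linear terrain approximation described in [1].
        A = np.column_stack([x, y, np.ones_like(x)])
        coef, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
        resid = z - A @ coef                 # height above the fitted surface
        sigma = np.std(resid[w > 0.5]) + 1e-9
        # Points far above the surface get small weights; points on or
        # below it keep full weight.
        w = np.where(resid > k * sigma, 1.0 / (1.0 + (resid / sigma) ** 2), 1.0)
    return resid < reject * sigma            # terrain = low residual
```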

Figure 1. DTM (left) vs. DSM (right).

Region Growth
After the point cloud is segmented into terrain and feature sets, the feature set can be processed further in order to extract individual buildings. Currently, a region growth process is used to determine which points in the feature set belong to separate connected structures by searching for groups of points that are “connected” in the original point cloud data. Points are defined as connected if they are contained in adjacent triangles formed by a Delaunay triangulation of the point cloud data. To start the region growth, an initial data point is selected from a group of points in the feature point set and used as the seed for an individual building candidate. The triangle list is then inspected and all triangles that contain the initial point as a vertex are identified. All the vertices of the triangles containing the seed point are then included in the building candidate set. After the growth process divides all non-terrain points into a set of unique building candidates, an initial building mask is created to remove undesired features. This building mask applies a surface-area threshold, which is currently successful in removing small buildings and small patches of vegetation that are not of interest.

Roof Facet Detection
A typical roof facet can be represented by a plane with a particular slope based on the gable. Thus, to find the roof facet, only those points that lie in a particular plane are considered. An algorithm to find the plane these points lie in is described in [2]. This algorithm creates a three-dimensional “cluster-space” whose three axes are the three parameters in the equation of a plane: the intercept value, d, the slope in the x direction, sx, and the slope in the y direction, sy. The equation of the plane is defined as:

Z = sx·X + sy·Y + d    (1a)


In our implementation, the cluster-space is formed by placing each candidate point for a roof facet into a particular bin in the cluster-space. This was done by varying the sx and sy parameters and placing the point in a bin according to its calculation of d. The bin size of d is set based on the expected measurement noise in the z direction (intercept direction). Given Z, X, and Y equation (1a) can be rewritten to yield an expression for d:

d = Z − sx·X − sy·Y    (1b)
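Equations (1a) and (1b) lend themselves to a simple voting scheme. The sketch below is an added illustration rather than the authors' implementation: the slope search grid and bin width are assumed values, and it only accumulates candidate points into (sx, sy, d) bins as described in the text.

```python
import numpy as np

def detect_roof_facet(points, slope_grid, d_bin=0.2):
    """Vote candidate roof points into a (sx, sy, d) cluster-space.
    points: (N, 3) array of x, y, z for one building candidate.
    slope_grid: iterable of (sx, sy) slope pairs to test.
    d_bin: bin width for the intercept d, set from the expected z noise."""
    x, y, z = points.T
    votes = {}                                   # (sx, sy, d_index) -> point indices
    for sx, sy in slope_grid:
        d = z - sx * x - sy * y                  # equation (1b)
        for i, di in enumerate(np.round(d / d_bin).astype(int)):
            votes.setdefault((sx, sy, di), []).append(i)
    # The densest bin corresponds to the dominant planar roof facet.
    key = max(votes, key=lambda k: len(votes[k]))
    return key, np.array(votes[key])

# Example slope grid: slopes from -1 to 1 in steps of 0.1 (an assumption).
grid = [(sx, sy) for sx in np.arange(-1, 1.01, 0.1)
                 for sy in np.arange(-1, 1.01, 0.1)]
```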

The bin with the largest number of points represents a set of points that lie on the same plane; these points are grouped and classified as a roof facet. To remove possible outliers, which can occur when non-facet points happen to lie in the roof facet plane, a region growth is conducted on the points in the bin to select only the connected points.

Edge Extraction
Once an accurate roof facet model is constructed for each building candidate, the algorithm extracts building features, specifically edges, from which it identifies corners. The extraction process first identifies the edges of the detected roof facets. The QHull convex hull algorithm [3] is used to find these edge points by returning the points that define the convex hull of the roof facet. The extraction of individual edges is based on the assumption that an edge can be found by searching for the two consecutive hull points with the largest distance between them. Points along the same edge are added using information obtained from the detected line. Once a facet edge is identified, its points are removed from the search pool and the process is repeated. Unlike the algorithm found in [2], no assumptions are made about the shape of a building feature. This is useful not only for buildings with non-rectangular linear shapes but also when a building appears near the edge of the ALS field of view.

Conclusion
When implementing an ALS-based downward-looking terrain and feature database integrity monitor, separating features containing large surface gradients from the underlying terrain is one of the key elements in exploiting the information found in ALS-generated point cloud data. As was shown in [4], sole comparison of the vertical components of a real-time ALS point cloud with a LIDAR-generated terrain database can lead to large, somewhat misleading errors in the presence of even small horizontal errors. Separating features from the terrain in an ALS-generated point cloud allows the terrain data to be processed separately from the feature data. The terrain data can then be processed by the downward-looking terrain database integrity monitor, while identification of edges and corners can be performed on the feature data. From the detected edges and corners, measurement metrics for horizontal position and horizontal orientation can be formed.

References
[1] Kraus K., N. Pfeifer, 1998, “Determination of terrain models in wooded areas with airborne laser

scanner data,” ISPRS Journal of Photogrammetry & Remote Sensing, vol. 53, pp. 193-203. [2] Maas, H., G. Vosselman, 1999, “Two algorithms for extracting building models from raw laser

altimetry data,” ISPRS Journal of Photogrammetry & Remote Sensing, vol. 54, pp. 153-163. [3] Qhull webpage, “Qhull”, www.qhull.org, August 11, 2005. [4] Campbell, J., A. Vadlamani, M. Uijt de Haag, S. D. Young, October 12-16, 2003, “The Application

of LiDAR to Synthetic Vision System Integrity,” Proceeding of the 22nd IEEE/AIAA Digital Avionics Systems Conference (DASC), Indianapolis, IN.

[5] Campbell, J. L., M. Uijt de Haag, F. van Graas, January 24-26, 2005, “Terrain Referenced Precision Approach Guidance,” Proceedings of the ION National Technical Meeting 2005, San Diego, CA, pp. 643-653.


Carbon Nanofiber Composites for Reverse Osmosis

Student Researcher: Elisa M. Vogel

Advisor: Dr. Glenn Lipscomb

The University of Toledo Department of Chemical and Environmental Engineering

Abstract Current commercial membranes used for reverse osmosis water filtration are made primarily from either polyamide or cellulose acetate materials. Polyamide membranes can withstand operation over a broader temperature range, are more resistant to biological attack, and have better salt rejection characteristics than cellulose acetate membranes. However, a more durable and selective membrane would be advantageous. Making a mixed matrix membrane consisting of carbon nanofibers dispersed in a polyamide matrix could increase the strength, selectivity and overall transport properties of the membrane. To produce membranes with uniform properties, the fiber must be evenly distributed throughout the polymer matrix. This will require good mixing of the carbon and polymer. Additionally, good bonding between the two phases is essential to prevent fluid bypassing through the interphase region and ensure stress transfer between the phases when under load. Project Objectives The objective of the proposed work is to develop a composite film that will have better mechanical and transport properties than either phase alone. Various polymers including polycarbonate and Nylon 6,6 are being tested to determine which polymer forms the most cohesive film with the carbon nanofiber powder. Methodology and Results A polymer is a high molecular weight macromolecule comprised of repeating monomer units and is usually carbon based. Carbon nanofibers are carbon particles possessing linear dimensions on the order of a few nanometers. These two materials possess vastly different properties and offer the potential to create improved filtration membranes by creating a composite that possesses the best properties of each. However, these differences in properties can frustrate attempts to mix them. The ultimate aim of this project is to disperse the carbon uniformly throughout the polymer matrix. The first attempt to mix the materials will utilize simple solution mixing. If this proves ineffective, the next step will be to alter the surface properties of the nanofibers to make the fibers appear more polymer-like, thereby facilitating mixing. To begin, a few different polymers were selected and each tested to determine which possesses the greatest affinity for the carbon nanofiber. Polycarbonate was the first polymer tested. The carbon nanofiber powder was added to a 10 weight percent solution of polycarbonate in methylene chloride. The solution was mixed vigorously in a sonic bath and a film was cast with the mixture. The solvent had a tendency to evaporate too quickly, resulting in a white film instead of the desired clear film. It was also noted that the carbon did not disperse evenly; instead the powder seemed to stick to itself forming relatively large masses irregularly distributed throughout the matrix. Since solution mixing was not successful, an alkanethiol was added to the mixture to improve mixing. The presence of the alkanethiol should cause the powder to appear more polymer-like by reacting with surface hydroxyl groups, thus improving the attraction between the polymer and nanofiber. Two different alkanethiols, with varying chain lengths, were used: butanethiol and dodecanethiol. A similar mixing procedure was tested, but it was found that the solutions would partially solidify if allowed to sit, even if sealed, overnight. The same problems with uneven dispersion were observed with the thiol-modified mixture. 
Matrimid®, a polyimide, was also tested following a similar procedure with even less desirable results.


Currently, Nylon 6,6, a polyamide, which is more like the materials used commercially as reverse osmosis membranes, is being tested. The first step will be to synthesize nylon. Next, films will be cast with the material. Lastly, the carbon nanofiber powder will be introduced into the system and films will again be cast. If necessary, the use of alkanethiols will be considered to improve the strength of the interactions between the nylon and the carbon. Once films have been cast, mechanical and transport properties of the films will be measured. The tensile modulus of each film will be determined using dynamic mechanical spectroscopy. The water permeability of each film will be determined by placing a sample in a holder and maintaining a water pressure difference across it. The hydraulic permeability is equal to the water flux across the membrane divided by the hydraulic pressure difference. The salt rejection of each sample will be determined as well by contacting one side of the film with a saline solution under pressure and determining the salt concentration of the permeate that passes across the membrane. The rejection coefficient is equal to the difference between the feed and permeate salt concentrations divided by the feed concentration. The goal is to increase water permeability while maintaining or increasing salt rejection. The anticipation is that the addition of carbon nanofibers will create well-defined channels for mass transport and therefore improve performance. Significance and Interpretation of Results An analysis of the results has not yet been completed, as well-mixed composite films haven’t been cast and consequently cannot be tested. Currently, work is being done to achieve better mixing between the carbon and the polymer, thus no data has been taken. As each film is cast it is visually inspected and it has been found that a sufficient degree of mixing has not yet been achieved. Mechanical tests will be performed once the desired degree of mixing is observed. Acknowledgments I would like to extend my sincerest thanks to my adviser Dr. Glenn Lipscomb for his continued support and guidance throughout this project. References 1. Strong, A. Brent. Plastics: Materials and Processing. 2nd ed. Upper Saddle River: Prentice-Hall, Inc.,

2000. 2. Lee, T. R., and R. Colorado. "Thiol-Based Self-Assembled Monolayers: Formation and

Organization." Encyclopedia of Materials: Science and Technology (2001): 9332-9344.


High School Anatomy and Physiology - Students Investigate Human Physiology in Space to Increase Their Understanding of the Human Cardiovascular System

Student Researcher: Kimberly J. Vogt

Advisor: Dr. Connie Bowman

University of Dayton

Department of Teacher Education

Abstract Critical thinking is an extremely important skill to foster in any academic classroom. Because teachers are required to teach large amounts of information in any academic school year, it may be difficult to devote time to helping students develop critical thinking skills. This two-day lesson is designed to allow students to investigate the function of the human cardiovascular system on Earth and in space. By comparing and contrasting human physiology on Earth and in space, the students will expand their critical thinking skills and gain a deeper understanding of the concepts involved. Anatomy and Physiology is a difficult academic subject. Because of the breadth of information, terminology, and concepts involved in the subject, it is difficult to encourage students to learn Anatomy and Physiology beyond the memorization level. The objective of this human physiology in space lesson is to extend student thinking and encourage them to apply, analyze, and synthesize the information they have learned about the cardiovascular system to determine the effects space travel will have on the function of the system. The students will simulate conditions such as Puffy-head, Bird-leg syndrome, a condition that occurs during space travel because of changes in gravity. The investigation will help the students understand the concepts at a deeper level. The impact of the lesson will be evaluated by the assessment of each student’s laboratory report. The students’ data charts and their answers to the post-lab questions will show that the objectives had indeed been met. Learning Objectives 1. The students will be able to construct their own data charts to record the data they collect during the laboratory investigation. (Grade Ten, Science, Scientific Inquiry, Indicator # 2) 2. The students will be able to analyze their data and their knowledge of human physiology to draw conclusions about the functioning of the cardiovascular system on Earth and in space. (Grade Ten, Science, Scientific Inquiry, Indicator # 4) 3. The students will be able to analyze their data and their knowledge of human physiology to predict the effect of different gravitational environments on the function of the cardiovascular system. (Grade Ten, Science, Scientific Inquiry, Indicator #4) Ohio Academic Content Standards Subject: Science Standard: Scientific Inquiry Participate in and apply the processes of scientific investigation to create models and to design, conduct, evaluate and communicate the results of these investigations. Grade/Benchmark: Grade 10 Area: Doing Scientific Inquiry Grade Level Indicators: 2: Present scientific findings using clear language, accurate data, appropriate graphs, tables, maps, and available technology. 4: Draw conclusions from inquiries based on scientific knowledge and principles, the use of logic and evidence (data) from investigations.


Student Grouping
The students will work in groups of two for this laboratory investigation. However, they will be required to construct their own individual data tables and answer the post-lab questions in their own words, and they will submit individual lab reports at the end of the activity. Students will work as partners to measure their heart rates and leg diameters. The individual lab reports will require each student to analyze the data and draw conclusions based on their individual knowledge. Thus, each student will be challenged to think critically about the investigation and the differences between cardiovascular function on Earth and in space.

Methods/Instructional Strategies
I will be facilitating cooperative learning partnerships during this lesson. The laboratory is highly student-involved, and the students will be responsible for their own learning (as they are every day in my class). The students will be engaged by the active investigation of Puffy-Head, Bird-Leg Syndrome because they will be able to actually simulate the effects of the syndrome. They will be collecting data and organizing their data into a table, and they will analyze their data to answer post-laboratory questions. All of these tasks require high levels of student engagement. At the same time, I will be circulating around the classroom to ask questions and check that the students understand what they are doing. I have chosen a laboratory activity for this lesson because research has shown that the incorporation of laboratory activities in science classrooms increases student engagement and motivation to learn (Freedman, 1997). The cooperative learning partnerships will encourage students to work together and help each other with the laboratory activity. Small-group work allows me to circulate around the classroom and talk to students on an individual level. In this setting, they may be more likely to answer my questions, or ask me questions, if they are too shy to volunteer during class.

Activities
1. Assess students’ prior knowledge about the differences between Earth’s environment and a space environment.
   - ask questions about these environments; brainstorm a list of differences as a class
2. Review students’ prior knowledge of the human cardiovascular system.
   - ask students to summarize the flow of blood through the human body
3. Introduce Puffy-Head, Bird-Leg Syndrome.
   - show pictures of astronauts who have experienced this syndrome
   - explain that the astronauts are experiencing a fluid shift—more blood flowing to their head and less to their lower extremities
4. Announce today’s activity—the students’ opportunity to investigate Puffy-Head, Bird-Leg Syndrome and analyze its causes and effects.
5. Distribute laboratory procedures and assign partners.
   - direct students to use the materials already pre-set on their desks
   - explain and demonstrate each position: the standing position is standing up straight; the head-down tilt position is lying on the floor with feet elevated on a chair
   - explain expectations for the laboratory report—data table and post-lab questions
6. Students should complete steps 1-4 of the laboratory procedure.
   - monitor student progress as they work
   - this may require more than one class period; one student may be the subject on the first day, and the other student on the second day
7. Discuss results. Students should complete post-lab questions. Collect lab reports.


Resources Required Each student group will need the following:

• Stop-watch or watch with a second hand
• Ruler
• Piece of string (or a tape measure, if available)
• Chair

Lesson Implementation Results The students were extremely engaged and interested in this activity. They seemed to enjoy the change of pace and the opportunity to actively study a fascinating topic. Space travel and Puffy-Head, Bird-Leg Syndrome piqued their interest immediately. Many were skeptical at first about the reality of the syndrome, but the activity allowed them to experience it first-hand. Many students observed that their head felt heavier when they elevated their feet. Measurements showed that legs did decrease in diameter in many cases. The students successfully reported their numerical results in data charts and analyzed their data to answer post-lab questions. The objectives of the lesson were definitely met. Description and Results of an Assessment Element The students were assessed on their organization of their data and their answers to the post-laboratory questions. The students were able to use their data and knowledge to make comparisons about cardiovascular function on Earth and in space, as well as on other planets. The students were able to predict that astronauts would experience a greater fluid shift on the Moon than on Mars because the Moon has less gravity. They were able to explain the effect of the head-down tilt position on their heart rate, stroke volume, and cardiac output. They successfully collected and analyzed their data. Critique and Conclusion of Project This laboratory activity was a fun, fascinating, and interesting way to stimulate critical thinking about human physiology. It is easy for both teachers and students of physiology to get bogged down by the immense amount of memorization required for the course. In the midst of cramming all of the terms, names, processes, etc, it is often difficult to foster a genuine deep understanding of the scientific concepts involved in human physiology. This laboratory activity, however, challenged students to apply their knowledge. By comparing conditions on Earth and in space, students analyzed changes that would occur in heart and blood vessel function when the human body is exposed to a new environment. In order to make these comparisons, the students must have a solid understanding of cardiovascular function on Earth; memorization of the process will not suffice. Furthermore, this activity required students to construct their own data chart. To do so, they had to carefully consider the data they would include and think critically about the best way to organize that data. Then they had to analyze their data in order to draw conclusions about the effect of gravity on cardiovascular function. In conclusion, this lesson was an excellent way to review key concepts of cardiovascular function and encourage critical thinking about the process. Acknowledgements: This research would not have been possible without the guidance of Dr. Connie Bowman from the University of Dayton and Mrs. Beth Carstens, Anatomy and Physiology teacher at Centerville High School. References 1. Freedman, Michael P. Relationship among Laboratory Instruction, Attitude toward Science, and

Achievement in Science Knowledge. Journal of Research in Science Teaching, 34 (4), 343-357. 1997.

2. Lujan, Barbara F. and Ronald J. White. Human Physiology in Space: A Curriculum Supplement for Secondary Schools.


Human Physiology in Space Lab

Activity: “Puffy Head, Bird Leg” Syndrome

Materials:
• Ruler
• Piece of string or a tape measure
• Chair
• Watch or clock with a second hand

Procedure:
*Note: You must create your own data sheet for this experiment, on a separate sheet of paper.
1. Determine the resting cardiac output of the subject in a standing position.
   a. determine heart rate (pulse) in beats/min
   b. assume stroke volume is 75 ml/beat
   c. calculate cardiac output (stroke volume x heart rate) in ml/min and convert to L/min
   d. repeat for a second trial
2. Determine the circumference of the subject’s leg in a standing position.
   a. always measure the same leg and the same part of the leg
   b. use string/ruler or tape measure to measure leg size on the lower part of the leg (calf)
   c. repeat for a second trial
3. Determine the circumference of the subject’s leg in the head-down tilt position.
   a. situate the subject in the head-down tilt position and rest there for 3 minutes
   b. measure the circumference of the leg as described in step 2
   c. do the next step (cardiac output), then repeat for a second trial; record the times of the first and second trials
4. Determine the resting cardiac output of the subject in the head-down tilt position.
   a. follow the directions in step 1, except assume stroke volume is 90 ml/beat
   b. repeat for a second trial
   c. now go back and do the second leg measurement; record the time

Observations/Questions
*Please answer these questions on a separate sheet of paper.
1. Write down your observations of the changes in the subject’s facial characteristics following orientation into the head-down tilt position.
2. Write down the subject’s own sensations related to head fullness in the head-down tilt position.
3. What is the main factor responsible for the headward fluid shift that occurs both in space and, using the head-down tilt simulation, on Earth?
4. The Moon has one-sixth the amount of gravity that the Earth has, and Mars has one-third. On which of the two, the Moon or Mars, would you experience a greater headward shift of fluid compared to your normal condition on Earth? Why?
5. Compare your calculated results for the resting cardiac output of the same subject when standing upright and in the head-down tilt position. Does a head-down tilt orientation affect the total value for cardiac output? Using your knowledge of how the head-down tilt orientation affects stroke volume and heart rate, explain your answer.
6. Why is it important not to compare data taken from one subject with data taken from another subject?
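As a worked example of the cardiac output calculation in step 1 (an added illustration; the heart rate of 72 beats/min is a hypothetical measurement, while the 75 ml/beat stroke volume is the value assumed in the procedure):

\[
\text{CO} = \text{SV} \times \text{HR} = 75\ \tfrac{\text{ml}}{\text{beat}} \times 72\ \tfrac{\text{beats}}{\text{min}} = 5400\ \tfrac{\text{ml}}{\text{min}} = 5.4\ \tfrac{\text{L}}{\text{min}}
\]

With the same hypothetical heart rate in the head-down tilt position (step 4, stroke volume assumed to be 90 ml/beat), the calculation gives 90 × 72 = 6480 ml/min ≈ 6.5 L/min.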


The 7xxx Series Aluminum Alloy for Aircraft Structures

Student Researcher: Kathryn D. Wehrum

Advisor: Jed E. Marquart, Ph.D., P. E.

Ohio Northern University Department of Civil Engineering

Abstract
The materials used within the aerospace discipline can be categorized into airframe materials and engine materials. The functions of airframe materials are constrained by design requirements, including weight, stiffness, strength, wear rates, resistance to corrosion, and cost. Aluminum alloys are highly regarded airframe materials because of their low density, excellent strength-to-weight ratio, resistance to oxidation and corrosion, and high thermal and electrical conductivity. The main weakness of aluminum alloys is low tensile strength. However, through mechanical processing, aluminum alloys can be developed to resist fatigue and other forms of wear. The fatigue lives of aluminum alloys are determined by crack initiation, and this cracking is increased by microporosity (pore spaces) that occurs during DC casting. Microporosity develops in long-freezing-range 7xxx aluminum alloys. Studies on reducing microporosity and improving the fatigue resistance of the 7xxx aluminum alloys will be made using an optical microscope, a tensile testing machine, and hardness testers. Results and comparisons from these tests will be recorded for numerous 7xxx series aluminum alloys, and conclusions will be drawn in order to recommend the best 7xxx aluminum alloy for the aerospace discipline.

Project Objectives
The objective of this project was to become familiar with the properties of the 7xxx series aluminum alloys used as aerospace materials. Understanding the fundamental characteristics of aluminum and aluminum alloys helps explain why the 7xxx series is the best selection for the aerospace discipline. Once this objective was clear, analysis was made of how the 7xxx design could be improved.

Methodology Used
An in-depth research process was performed to understand the basic functions of aluminum alloys. All types of alloys were investigated, but it was clear that the 7xxx series is the “aerospace alloy.” Emphasis was then placed on this series to understand what characteristics make it the ideal choice for the aerospace industry, in particular aircraft structures. The next steps of this procedure are to obtain data and results for 7xxx alloys with improved welding techniques.

Results Obtained
Aluminum alloys are combinations of aluminum with other metals that provide more strength and stiffness than pure aluminum. These alloys make up more than 70 percent of the structure of a modern airliner. The most common and frequently used aluminum alloys are the 2xxx and 7xxx series, and they are located throughout the aircraft structure, including the body, wings, and tail. The 7xxx series aluminum alloy is often referred to as the aerospace alloy because its properties make it favorable for aerospace design. These properties include good heat-treating characteristics and, most importantly, the greatest strength of all commercial heat-treated alloys. The metal is alloyed primarily with zinc (5.5 – 6.5 percent by weight of the alloy) and additionally magnesium (2.0 – 3.5 percent by weight), copper (1.5 – 2.5 percent by weight), and chromium (less than one percent by weight). Because of the presence of copper, the 7xxx series alloys provide little corrosion resistance. Most of these aluminum alloys undergo heat treatment, which provides thermal energy to dissolve the alloying elements fully; the microstructural features are then locked into the aluminum. This provides


better strength. Research has shown that the best 7xxx series material overall is 7075. It is among the highest-strength alloys and is well suited to highly stressed areas of the aircraft. Aircraft structural design calls for aluminum alloys with increased strength and reduced density. Corrosion must be minimized, and the additional weight from joining sheets of aluminum alloy with joints and rivets must be reduced. Advances in assembly methods are the solution to better aircraft designs of the future. Recent studies have shown a possible application of friction stir welding for joining the previously unweldable 7xxx series, which would decrease the total weight of the aircraft and prove to be a cost-effective solution.

Significance and Interpretation of Results, Figures and Charts
Once testing is completed, results and conclusions can be used to create improved lightweight designs.

Acknowledgments
The author would like to extend her gratitude to her project advisor, Jed E. Marquart, Ph.D., P. E.

References
1. “Aluminum 7075-T73; 7075-T735x”. Matweb.com. 2006. 04 Jan 06. <http://www.matweb.com/search/SpecificMaterial.asp?bassnum=MA7075T73.>
2. “Aluminum Metallurgy”. SecoWarwick. 2005. 20 Jan 06. <http://www.secowarwick.com/pressrel/articles/aluminummetallurgy.htm>.
3. “Lecture 17: Heat treatable aluminum alloys”. Mmat.ubc.ca. 10 April 06. <http://www.mmat.ubc.ca/courses/mmat380/lectures/2004/Lecture%2017-Heat-treatable%20Aluminum%20Alloys(Complete).pdf>.
4. Silverman, David C. “Tutorial on Classification Numbers of Various Alloy Families”. Argentum Solutions, Inc. 04 Jan 06. <http//www.argentumsolutions.com/tutorials/alloy_tutorialpg5.html>.


Passive Radar Coverage Analysis Using Matlab

Student Researcher: Brian J. Wirick

Advisor: Dr. Brian Rigling

Wright State University Electrical Engineering Department

Abstract
Passive radar technology is an innovative approach to air surveillance and aircraft detection. Our atmosphere is filled with various sorts of electromagnetic waves propagating through the air. These radio waves are meant to travel directly from the transmitter to the receiver for their specific application; incidentally, however, they also bounce off solid objects in their path. Passive radar systems use these reflections of already existing broadcast signals to identify targets such as airplanes, boats, and other objects of interest. Research on passive radar systems has been gaining popularity in recent years because of the growth of commercial transmitters and the many benefits a passive system has to offer. Examples of everyday transmissions that could be used for this technology include AM and FM radio broadcasts, analog television signals, cell phone towers, HDTV transmissions, HAM radio repeaters, and essentially any other signal that is being transmitted. A few passive radar systems are already being tested today. Lockheed Martin has a system known as Silent Sentry, which uses commercial FM radio stations to passively detect and track airborne targets in real time [3]. A UK-based company, Roke Manor Research, has a system called Celldar that utilizes existing cell phone tower transmissions for target illumination [4].

There are many benefits offered by a passive radar system for military and commercial use. The FAA is largely transponder-dependent and relies less on radar; however, not all aircraft are equipped with transponders, and low-flying aircraft may be able to slip under conventional radar coverage. Implementing a passive radar system would be one way to heighten the FAA’s capabilities. The primary advantage of passive radar systems is that the transmitted signals needed are already present and offer vast area coverage. This eliminates the need to construct the powerful and expensive transmission component of a conventional radar system; all that is needed are modified matching receivers, which are significantly cheaper than transmitters. A second advantage of the system’s passive nature is that there is no specific radar frequency signature for the enemy to detect, meaning that aircraft, such as enemy aircraft in hostile areas, would have no way of knowing whether they were being observed. Because only the receiver component is needed, the system is easily deployable [3]. For such a mobile system, a method of calculating coverage area for optimal receiver placement in domestic and foreign lands could be of great help.

Project Objectives
For this project I will compute coverage maps based on hypothetical receiver placements for potential areas of surveillance by exploiting publicly available DTED maps. I will create an analysis tool to load these maps into MATLAB and accept receiver locations as defined by the user. The output will be maps showing theoretical receiver coverage, as well as the percent of the area that could be monitored. The coverage provided by the receiver over the input terrain will be calculated in terms of line-of-sight and signal-to-noise ratio. This tool could then be used to find the transmitter/receiver arrangement that gives optimal coverage and the best probability of detection.

Methodology
The first step in this project was finding publicly available elevation maps, known as “geospatial data.” There is a wide range of data types in this area to choose from, with varying resolutions and formats. Examples of gridded geospatial data sets available are Earth Topography (ETOPO), Global


Land Coverage Characteristics (GLCC), Digital Terrain Elevation Data (DTED), Digital Elevation Maps (DEM), the Global Land One-km Base Elevation project (GLOBE), and the National Elevation Dataset (NED), as well as others [1]. These all contain elevation data in a gridded format representing specific portions of the Earth. The format I chose to use was DTED, as it has moderate worldwide coverage in a format that is Matlab-friendly. The data are accessible via the internet from the National Geospatial-Intelligence Agency at http://geoengine.nga.mil/geospatial/SW_TOOLS/NIMAMUSE/webinter/rast_roam.html. There are several levels of DTED maps with varying resolution. The levels go from 0 to 5, with level 0 being the lowest resolution and the one available to the public online. Level 0 DTED maps are sampled every 30 arc seconds, equivalent to about one-kilometer resolution. This allows a larger area of land to be analyzed more quickly, although with less accuracy. A list of the other DTED levels and resolutions can be seen in Table 1.

Once a mapping format was selected, it had to be read into Matlab where it could be displayed and analyzed. I wrote a script file that, when executed in the Matlab environment, allows the user to interactively select a DTED file to be opened. Once the file is read in, I plot the elevation data using a 3D surface plot and adjust shading, colors, and lighting to best resemble earth terrain. As DTED maps are arranged, they will display negative elevations corresponding to the ocean floor. For this application that is not needed, and it could skew the results by reporting that land deep in an ocean ridge is not covered, when in reality targets would only be at sea level at those locations. To correct this, I set any elevation below zero equal to sea level so that it resembles the ocean surface. I then added a legend relating the map colors to elevation in meters. An example plot of terrain just east of St. Louis, Missouri, can be seen in Figure 1.

The next step is selecting a location for a theoretical receiver placement on the map. I made this possible through two different methods: the user can interactively click a spot on the elevation map, or enter an exact latitude and longitude coordinate in a series of two input windows. These windows display the latitude and longitude boundaries and can be seen in Figure 2. From the user-defined receiver location, line-of-sight is calculated with the receiver raised 30 feet off the ground, as if it were in a tower; this is more realistic and allows for greater land coverage. The line-of-sight result is entered into a matrix the same size as the elevation data map, with 1’s representing line-of-sight and 0’s representing no visibility. This matrix is overlaid on the 3D elevation map with one color representing line-of-sight coverage and another color representing no visibility. In addition, the data are overlaid on a top-view 2D contour map as another way of visualizing coverage, which can be seen in Figure 3. The percent of land able to be monitored by a receiver at this location is calculated by totaling the number of 1’s (representing visible land) in the line-of-sight matrix and dividing by the total number of points in the matrix. The resulting percentage is then displayed to the user in a message box, as seen in Figure 4. The user can then place the receiver in another location and experiment to find a placement that provides sufficient coverage.
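The line-of-sight test and coverage percentage described above can be sketched as follows. This is an added illustration in Python/NumPy rather than the Matlab used in the project; the grid spacing, receiver height, and ray-sampling visibility test are simplifying assumptions.

```python
import numpy as np

def line_of_sight(elev, rx, ry, tx, ty, rx_height=9.1, cell=900.0):
    """True if cell (tx, ty) is visible from a receiver at grid cell (rx, ry)
    mounted rx_height meters (about 30 ft) above ground. elev is a 2D
    elevation grid in meters; cell is the grid spacing in meters
    (roughly 30 arc seconds for DTED level 0)."""
    n = int(max(abs(tx - rx), abs(ty - ry))) + 1
    xs = np.linspace(rx, tx, n).round().astype(int)   # sample the sight line
    ys = np.linspace(ry, ty, n).round().astype(int)
    dist = np.hypot(xs - rx, ys - ry) * cell
    h0 = elev[rx, ry] + rx_height
    # Height of the straight sight line at each sample point.
    sight = h0 + (elev[tx, ty] - h0) * dist / max(dist[-1], 1e-9)
    return np.all(elev[xs, ys][1:-1] <= sight[1:-1])  # no terrain blocks the ray

def coverage_percent(elev, rx, ry):
    """Return the 1/0 visibility matrix and the percent of cells visible,
    i.e., the sum of 1's divided by the total number of points."""
    vis = np.zeros_like(elev, dtype=int)
    for tx in range(elev.shape[0]):
        for ty in range(elev.shape[1]):
            vis[tx, ty] = line_of_sight(elev, rx, ry, tx, ty)
    return vis, 100.0 * vis.sum() / vis.size
```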
Results Obtained / Significance
This simulation tool provides a 3D view of the input terrain to be analyzed for receiver placement. In addition, it displays 2D and 3D theoretical coverage maps for the chosen receiver location, along with the percent of terrain that would be visible from that point. These results could help assess unfamiliar terrain when selecting locations for optimal radar surveillance.

Acknowledgments
I would like to thank my advisor, Dr. Brian Rigling, for his support and encouragement in helping me finish this project.


Figures and Tables

Table 1. DTED levels and resolutions [2].

DTED Level   Spacing     Ground Distance   Tile Size
1            3 sec       100 m             1 degree
2            1 sec       30 m              1 degree
3            0.333 sec   10 m              5 minutes
4            0.111 sec   3 m               1 minute
5            0.037 sec   1 m               30 seconds

Figure 1. Example of terrain elevation plot.

Figure 2. Input windows for latitude and longitude of receiver placement.


Figure 3. Contour plot showing the area visible to the passive radar receiver.

Figure 4. Window displaying percent of terrain visible from radar receiver location.

References
[1] http://www.mathworks.com/support/tech-notes/2100/2101.html
[2] http://www.fas.org/irp/program/core/dted.htm
[3] http://www.dtic.mil/ndia/jaws/sentry.pdf
[4] http://www.roke.co.uk/sensors/stealth/celldar.asp


Inquiry-based and Discovery Learning in Mathematics

Student Researcher: J. Rose Wright

Advisors: Rebecca Krakowski, University of Dayton, and Ann Farrell, Wright State University

Wright State University

School of Graduate Studies

Abstract
As we have seen in the past decade or so, standardized tests are setting a precedent that we had better acknowledge and yield to, or die trying. Though these tests were designed to encourage teachers to teach the concepts the state and nation laid out, students are getting lost somewhere in the mix. Research has shown that students today do not learn the same way we did years ago. With the influx of technology and multimedia, rote learning is to them what an abacus would have been to us once calculators became the norm. Yet we force students to sit quietly while we lecture and encourage them to copy problems. In a world where games are interactive, we must compete with significant collateral. Inquiry-based and discovery learning have shown tremendous potential, so I have decided to incorporate these types of learning environments into my classroom. Many of the resources provided by NASA, as well as other resources, will be used. Since my students will have more than enough problems on their minds, this type of learning will keep their minds occupied while they participate in the lessons. For students who are behind grade level in math, it will level the playing field, since the experience will be new for everyone. My hope is that by actively engaging in each of the lessons, students will form a concrete picture in their minds to relate to the abstract ideas found in math books. Since they will be writing to explain their answers, this allows for investigation of conceptual knowledge. Working in groups to complete the lessons will, I hope, be a valuable experience for students as well. Too many times I see students relying on the teacher for right or wrong and not wanting to say anything for fear of being wrong. With these activities, students will have to build trust in their classmates as well as the self-confidence to explain and defend their decisions. Correctly memorizing an algorithm does not ensure that a student has learned the material; but if a student understands the concept that underlies the algorithm, then even if the student cannot remember the algorithm, the student will still know how to obtain the answer and why. Since students will be working in groups, they are likely to receive more individual attention. Because of the in-depth knowledge I hope students will gain, I hypothesize that future Ohio Graduation Test (OGT) scores will increase.

Project Objective
The objective of the project was (and still is) to see how hands-on learning affects overall student achievement and retention of 'learned' information. While we want to see improved OGT results in math, the ultimate goal is to see students able to confidently attack math problems with a deep understanding of the content.

Background
Students in an inner-city Midwestern alternative high school district were used. The school was chosen because it serves at-risk students. Most of them are dropouts or attend so irregularly as to be truant. These students have severe behavioral problems and are not responsive to authoritative direction. The socio-economic level of the majority of these students falls below the national poverty level. They are given an opportunity to earn a full credit in one quarter, and the credits earned still allow them to obtain a high school diploma. It is a small school serving fewer than 100 students, with at most a 1:15 teacher-to-student ratio. For most of the students, this school is their last chance at success.
Although these students were enrolled in high school, the math comprehension of over 80% of them was no higher than a 7th-grade level.


Methods
Discovery and inquiry-based learning were used over 75% of the time in class. Students completed self-developed labs during the quarter. They worked in groups while I walked around answering and asking questions to deepen understanding. Students would collect their own data, process the data, and make their own analysis. Technology (the Internet and graphing calculators) was also used in the labs to expose students to modern trends in mathematics learning. There was also a two-week workshop that I devoted to the basics (including addition, subtraction, multiplication, and division of negative numbers and fractions). During this workshop, students worked with base-ten blocks, color tiles, dice, and pennies, and even became a human number line. The goal was for them to gain an explicit understanding of the basic material needed to succeed in a high school math classroom. For all activities, pre- and post-tests were used to gauge the level of understanding before and after the labs. Students were able to view both tests and assess themselves as well.

Results
In one two-week session, after just one week (7.5 hours) of hands-on learning, significant differences were noticeable in students' work. Though no student aced the post-test, every student increased their score by at least 50%, and there was evidence of understanding. Even the students who scored in the 40-60 percent range on the post-test had scored in the 0-20 percent range on the pre-test. Because of the amount of writing required in the labs, students' comprehension is clearly evident: students are used to proving, with a response and supporting evidence, how they obtained their results. I know that if they can explain what they did, and why they did it, then they understand it. Students are also less apprehensive about asking questions or commenting in class, since there is no right or wrong answer. Another effect came from the explanations I required on every question for full credit: knowing that explicit understanding needed to be shown caused students to come to class and pay attention. Because I did not have to lecture, I was also able to move around the room and make sure students understood the material. By changing variables for different groups, students were able to compare findings and analyze results.

Significance of Findings
Since these results come from only two quarters of work, I am convinced that as I work out the quirks, the results will continue to improve. I am in the process of developing new labs so that I will not have to do any board work. The labs, though long, cover a large amount of material, which helps tie all of the math together for the students. Having students work in groups also lets them trust their own and their classmates' views instead of always looking to me for the 'correct' answer. It is also worth noting that when students understand something, they can explain it to others. The material covered in the labs gives students something concrete and real to relate the math to; they are then able to explain it not only to me but also to other students who may have missed something. Now my tests come back with writing and pictures all over them. My students know they have to convince me that they understand the material.

Future Projects
Since the OGT tests were just administered in March, the results are not back yet.
It would also be worthwhile to research whether students who learn by 'discovery' fare better than students in 'normal' classrooms.


Load Balancing Network Streams

Student Researcher: Chad O. Yoshikawa

Advisor: Dr. Kenneth A. Berman

University of Cincinnati ECECS Department

Abstract
In this research, we are attempting to build a distributed filesystem capable of handling thousands of widely-distributed, network-challenged clients. These are clients that cannot communicate directly with one another; each client must therefore send file requests via a third party, or waypoint. For example, most clients on the Internet today are behind a firewall, a network-address-translation (NAT) box, or a combination of both. Firewalls can prevent incoming connections, while NAT boxes hide clients' true network addresses; in either case, files cannot be sent to clients behind these devices. Our system, called Galaxy, solves this problem by re-routing data intended for challenged clients to a set of third-party waypoint servers. Data at the waypoint machines is then fetched by clients at some later time. An analogous situation occurs in the real world when a post office holds mail that was temporarily undeliverable. The construction of the client filesystem interface, the waypoint system software, and the load-balancing algorithm that assigns clients to servers is the subject of our ongoing research.

Introduction
Current trends indicate that volunteer and peer-to-peer (P2P) resource sharing is becoming an increasingly popular modality in computer systems research. Volunteer applications are premised on the fact that there exists a large abundance of Internet-accessible idle resources. CPU cycles are typically the commodity in volunteer-computing applications: 500,000 volunteer computers are currently used en masse by the SETI@Home project [7] in a global effort to scan radio signals for signs of extraterrestrial intelligence. Many other volunteer computing projects are underway; see, for example, the WorldGrid effort [6], Google's Google Compute project [3], Folding@Home [2], and Distributed.net's code-breaking applications [1]. Volunteer disk space has been utilized both directly and indirectly by many P2P filesharing and distributed filesystem projects, including the first P2P filesharing application, Napster [5], and more recent incarnations such as KaZaa [4], Gnutella [21], and others. In these applications, recently downloaded content is shared by default with the rest of the community, providing an indirect 'incentive' to share files [12]. At the same time, P2P storage projects such as OceanStore [14], PAST [23], Samsara [10], and others have explored using well-connected network nodes to provide a reliable storage service to the (not so well connected) masses. BitTorrent [9], a recently popular P2P file sharing application, makes use of another type of resource, network bandwidth, available from peers in order to increase the quality of service delivered to sharers of very large (multi-gigabyte) files. Despite the success of these projects, a major limitation remains to be solved: the vast majority of problems tackled with volunteer computing resources have been constrained by the fact that most volunteers have limited networking ability, e.g., they often find themselves behind firewalls or NAT devices that render them unable to accept incoming network connections [13]. This 'one-way networking' problem has had profound effects on the scope and scale of all areas of P2P research.
For example, in file sharing applications, such users become drains on the system, since they are unable to contribute (upload) files to the community [12]. Volunteer computing applications are also affected: applications written on top of CPU-harvesting services are limited to a single type, embarrassingly parallel non-communicating tasks, despite the fact that medium-grained (communicating) parallel applications have been successfully run in the wide area [18, 11]. In essence, resources on the 'private Internet' have been made unavailable. In this report, we offer the Galaxy volunteer-computing architecture as a solution to this problem. Galaxy uses a collection of well-placed public computers to serve as a network indirection service for all private volunteer computers.


This service makes communication to private nodes possible and enables new types of volunteer applications to be built, including global filesystems, parallel scientific-computing applications, and desktop-to-desktop network measurement tools. As proof of the soundness of the architecture, we propose to design and build a volunteer-fueled distributed filesystem, the Galaxy Filesystem, which adheres to our proposed architecture.

Project Objectives
It is our objective that the design of the Galaxy Filesystem will advance computer systems research in areas including load balancing and so-called 'churn resilience'. Load-balancing techniques will be developed in order to optimally map volunteer computers to the 'best' coordinator machines; an unequal balance of load would result in poor performance or even system malfunction and overload. 'Churn resilience', a term coined by the researchers of the Bamboo project [20], is a quality especially required of a filesystem built from volunteer computers: the filesystem must be resilient to high churn (joining and leaving) rates among volunteer computing resources. It should be noted that churn resilience is markedly different from fault tolerance. Fault-tolerant programs are not necessarily able to survive in an environment where many machines are joining and leaving the network. Fault tolerance usually implies that the running program operates correctly even when a subset k of the total n processors fail; churn resilience, on the other hand, implies that a program will complete successfully and efficiently in the face of many users joining and leaving the system.

Methodology Used
For the Galaxy Filesystem prototype, we have chosen PlanetLab [8] to serve as our stable network indirection service. The reason for this is twofold: (1) PlanetLab has good networking and geographic characteristics for our system, and (2) we are familiar with PlanetLab, having used it to support our research for the past year. The Galaxy Filesystem is a distributed service, with server software running on the set of stable PlanetLab nodes and client software (volunteer applications) executing on the set of volunteer nodes. The servers are responsible for monitoring application health, maintaining a current list of volunteers, and providing volunteer-to-volunteer messaging capabilities useful for disk block transfers. The clients are assumed to be Windows-based machines, since most of the computers in our testbed (the ECECS department) run the Windows operating system. The client software is a Windows name-space extension (NSE), which extends the user's view of the filesystem to include files available from remote computers via the Galaxy infrastructure.

Andrew Benchmarks
The Andrew benchmarks [17] are recognized as a touchstone for filesystem performance. This benchmark simulates a software development cycle, including the creation of a set of directories, creation of a set of files, examination of the files, and finally a compiling and linking phase using those files. For our Galaxy filesystem client, we used only two phases of the four-phase Andrew benchmark: the file creation phase (data writing) and the file scanning and copying phase (data reading).

Results Obtained
In our previous research [26, 25], we built a messaging layer on top of the Pastry [22] key-based routing layer. This layer, 'Distributed Hash Queues' (DHQ), provides a naming and network indirection service that runs on the PlanetLab testbed.
These queues are assigned 160-bit names and they support operations including 'enqueue' and 'dequeue'. Through consistent hashing, queues are roughly load balanced across the PlanetLab nodes, and a queue is locatable via its unique name from any node in the network. By creating a request and reply queue for each volunteer in the system, we can thus build basic volunteer-to-volunteer messaging. In fact, in our research we have built a system called DynamicWeb which provides HTTP request and HTTP reply messaging on top of the DHQ abstraction.
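The DHQ layer itself is built on Pastry and is not reproduced here; the short Python sketch below only illustrates the idea described above, namely mapping 160-bit queue names onto servers by consistent hashing and layering per-volunteer request/reply queues to get volunteer-to-volunteer messaging. The class and server names (DHQRing, "planetlab-1", and so on) are hypothetical, and queue storage is simulated in-process rather than held on the owning server as it would be in the real system.

    import hashlib
    from collections import defaultdict, deque

    class DHQRing:
        """Toy model of Distributed Hash Queues: 160-bit queue names are mapped
        to servers with consistent hashing; enqueue/dequeue act on the named queue."""

        def __init__(self, servers):
            # Place each server on a 160-bit ring at the SHA-1 hash of its name.
            self.ring = sorted((self._key(s), s) for s in servers)
            self.queues = defaultdict(deque)          # queue name -> FIFO contents

        @staticmethod
        def _key(name):
            return int(hashlib.sha1(name.encode()).hexdigest(), 16)   # 160-bit id

        def owner(self, queue_name):
            # The owning server is the first one whose ring position is >= the queue id.
            k = self._key(queue_name)
            for pos, server in self.ring:
                if pos >= k:
                    return server
            return self.ring[0][1]                    # wrap around the ring

        def enqueue(self, queue_name, msg):
            self.queues[queue_name].append(msg)       # in the real system, an RPC to owner(queue_name)

        def dequeue(self, queue_name):
            q = self.queues[queue_name]
            return q.popleft() if q else None

    # Volunteer-to-volunteer messaging via per-volunteer request and reply queues:
    ring = DHQRing(["planetlab-1", "planetlab-2", "planetlab-3"])
    ring.enqueue("volunteerB/request", "GET block 42")    # volunteer A asks B for a block
    req = ring.dequeue("volunteerB/request")              # B polls its own request queue
    ring.enqueue("volunteerA/reply", "<block 42 data>")   # B answers via A's reply queue

In this sketch, adding or removing a server only reassigns the queues whose ids fall between that server and its ring neighbor, which is the property that keeps queues roughly load balanced as nodes come and go.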


Significance and Interpretation of Results
Our preliminary results show that the DHQ system, the substrate upon which the Galaxy Filesystem will be built, provides bandwidth that scales with the number of servers. Figure 1 shows the bandwidth measurements of the system over the life of a particular two-minute test [25]. In this test, 32 clients apply a steady load of HTTP requests via a dynamic set of DHQ servers. Each client continuously requests files of size 16 KB, using the servers as an indirection service; in actuality, each client is requesting a document from its own web server using a DHQ server as a proxy. (The clients are statically assigned to servers in order to perfectly balance load.) The servers are dual-processor AMD Athlon 1800+ machines with 2 GB of RAM each, while the clients are dual Pentium III 450 MHz machines with 1 GB of RAM. While the number of clients stays fixed at 32, the number of DHQ servers grows from 1 to 15, doubling for each test. (Only 15, not 16, servers were available for the test.) When DHQ uses only a single server, the load is heavy but no network communication between DHQ servers is necessary, since there is only one. As the number of servers grows to two, the two DHQ servers must collaborate and communicate with each other in order to return consistent results to the clients; with two DHQ servers, for example, each HTTP request/response may take an additional network hop. Furthermore, as the DHQ server farm grows, the underlying Pastry key-based lookup layer must do additional work to find the physical location of queues. Thus, two competing forces in the DHQ system produce the results seen in the graph: the more servers we add, the better the heavy client load is distributed, but the higher the network cost of maintaining the DHQ system. Although the throughput (HTTP requests per second) increases as more servers are added, the increase is not linear. We expect that with smarter placement of queues and caching of queue locations, we will be able to increase performance significantly. The remaining figures show the Galaxy filesystem client performance results (Figures 2 and 3). The main result is that the Galaxy client is competitive with existing, native filesystem clients, which indicates that we can provide an interface to remote Galaxy files with performance in line with what users have come to expect on the Windows platform. With the client software and messaging software complete, our current research involves load balancing of clients to servers in order to achieve maximum throughput and fairness to clients. We are currently investigating 'oblivious routing' techniques, which can be used to load-balance clients to servers without requiring a centralized server.


Figures/Charts

Figure 1. Throughput of the DHQ system (in KB/s) as servers are added for a fixed client load. Bandwidth ramp-up and ramp-down effects are visible at the start and end of the test.

Figure 2. A breakdown of the time spent in the Galaxy filesystem during an ISO-file write operation.


Figure 3. Benchmark showing the performance of the Galaxy NFS filesystem extension. Galaxy, with 8 threads, outperforms the Microsoft SFU version 2 NFS client and is competitive with the NFSv3 client.

Acknowledgements
I would like to thank, first and foremost, the Ohio Space Grant Consortium for providing the funding that enables me to perform this research. I would also like to thank my advisors at the University of Cincinnati: Dr. Kenneth A. Berman, Dr. Fred Annexstein, and Dr. Urmila Ghia. Also, I thank Dr. Gary Slater, the OSGC liaison at the University of Cincinnati, for all of his help and guidance.

References
[1] Distributed.net. http://www.distributed.net, 2004.
[2] Folding@Home. http://folding.stanford.edu, 2004.
[3] Google Compute. http://toolbar.google.com/dc/offerdc.html, 2004.
[4] Kazaa. http://kazaa.com/, 2004.
[5] Napster. http://napster.com/, 2004.
[6] World Community Grid. http://www.worldcommunitygrid.org/, 2004.
[7] D. P. Anderson, J. Cobb, E. Korpela, M. Lebofsky, and D. Werthimer. SETI@home: An experiment in public-resource computing. Commun. ACM, 45(11):56-61, 2002.
[8] B. Chun, D. Culler, T. Roscoe, A. Bavier, L. Peterson, M. Wawrzoniak, and M. Bowman. PlanetLab: An overlay testbed for broad-coverage services. SIGCOMM Comput. Commun. Rev., 33(3):3-12, 2003.
[9] B. Cohen. Incentives build robustness in BitTorrent. In Proceedings of the First Workshop on the Economics of Peer-to-Peer Systems, 2003.
[10] L. P. Cox and B. D. Noble. Samsara: Honor among thieves in peer-to-peer storage. In ACM Symposium on Operating Systems Principles, pages 120-132, 2003.
[11] T. A. DeFanti, I. Foster, M. E. Papka, R. Stevens, and T. Kuhfuss. Overview of the I-WAY: Wide-area visual supercomputing. The International Journal of Supercomputer Applications and High Performance Computing, 10(2/3):123-131, Summer/Fall 1996.
[12] M. Feldman, C. Papadimitriou, J. Chuang, and I. Stoica. Free-riding and whitewashing in peer-to-peer systems. In PINS '04: Proceedings of the ACM SIGCOMM Workshop on Practice and Theory of Incentives in Networked Systems, pages 228-236. ACM Press, 2004.
[13] Y.-h. Chu, A. Ganjam, T. E. Ng, S. G. Rao, K. Sripanidkulchai, J. Zhan, and H. Zhang. Early experience with an Internet broadcast system based on overlay multicast. Technical Report CMU-CS-03-214, CMU, December 2003.
[14] J. Kubiatowicz, D. Bindel, Y. Chen, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, and B. Zhao. OceanStore: An architecture for global-scale persistent storage. In Proceedings of ACM ASPLOS. ACM, November 2000.
[15] S. Marisarla, V. Narayanan, U. Ghia, and K. Ghia. Structural analysis of a joined-wing configuration using equivalent box-wing and reinforced-shell model. Presented at the 28th AIAA Mini-Symposium, 2003.
[16] P. Mutnuri, H. Ayyalasomayajula, U. Ghia, and K. Ghia. Analysis of separated flow in a low-pressure-turbine linear cascade using multi-block structured grid. In 41st Aerospace Sciences Meeting and Exhibit, 2003.
[17] J. K. Ousterhout. Why aren't operating systems getting faster as fast as hardware? In USENIX Summer, pages 247-256, 1990.
[18] M. Reich, T. Beisel, H. Berger, K. Bidmon, E. Gabriel, R. Keller, and D. Rantzau. Clustering T3Es for metacomputing applications. In Proceedings of the Cray User Group Conference, 1998.
[19] S. Rhea, P. Eaton, D. Geels, H. Weatherspoon, B. Zhao, and J. Kubiatowicz. Pond: The OceanStore prototype. In Proceedings of USENIX File and Storage Technologies (FAST), 2003.
[20] S. Rhea, D. Geels, T. Roscoe, and J. Kubiatowicz. Handling churn in a DHT. Technical Report CSD-03-1299, UCB, December 2003.
[21] M. Ripeanu, I. Foster, and A. Iamnitchi. Mapping the Gnutella network: Properties of large-scale peer-to-peer systems and implications for system design. IEEE Internet Computing Journal, 6(1), 2002.
[22] A. Rowstron and P. Druschel. Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems. In IFIP/ACM International Conference on Distributed Systems Platforms (Middleware), pages 329-350, November 2001.
[23] A. Rowstron and P. Druschel. Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. In Proceedings of the Eighteenth ACM Symposium on Operating Systems Principles, pages 188-201. ACM Press, 2001.
[24] R. Sivaji, U. Ghia, K. N. Ghia, and H. Thornburg. Aerodynamic analysis of the joined-wing configuration of a HALE aircraft. In 41st Aerospace Sciences Meeting and Exhibit, 2003.
[25] C. Yoshikawa, B. Chun, and A. Vahdat. Distributed hash queues: Architecture and design. In Third International Workshop on Agents and Peer-to-Peer Computing (AP2PC), 2004.
[26] C. Yoshikawa, B. Chun, and A. Vahdat. The Lonely NATed Node. In Proceedings of the 11th ACM SIGOPS European Workshop, Leuven, Belgium, September 2004.
