Our community blogs
A Journal of Applied Mechanics and Mathematics by DrD, # 24
© Machinery Dynamics Research, 2016
Torsional Stiffness of a Shaft -- Part III
The discussion in previous parts of this series has focused on stiffness (or compliance) estimates for various shaft geometries. There has been nothing said yet about joining parts together, although most readers will readily agree that integral (single piece) shaft assemblies are very rare in practice. It is time to discuss joining multiple components together to form a shaft system.
The list of possible coupling types is almost endless, so this article will simply focus on a few of the more common types to illustrate the thought processes.
Keyway or Spline
Before considering the actual joint to a second member, if the connection is to be made by means of a key or a spline, it is appropriate to look at the way the keyway (or spline) itself increases compliance in the section where there is no torque transfer.
Where a key is used, the keyway is usually cut significantly longer than the key itself. This results in a section of the shaft that effectively has reduced diameter. Experience has shown that this can be treated adequately by considering it to be a uniform solid shaft of diameter Deff as shown in Fig. 1.
Similarly, where a spline is used, it is not difficult to see that the ribs that form the spline teeth carry no significant shear. Thus the part of the shaft that is splined is also properly modeled as a uniform solid shaft with diameter Deff as shown in Fig. 1.
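To make the effective-diameter idea concrete, here is a minimal Python sketch of how a keyed section, modeled as a uniform solid shaft of diameter Deff, enters a stiffness calculation. The shear modulus and the Deff value below are illustrative assumptions; in practice Deff would be read from Fig. 1.

```python
import math

def torsional_stiffness(d, length, G=79.3e9):
    """Torsional stiffness k = G*J/L of a uniform solid circular shaft.
    d and length in metres; G in Pa (79.3 GPa is typical for steel)."""
    J = math.pi * d**4 / 32.0   # polar second moment of area, m^4
    return G * J / length       # N*m per radian

def series_stiffness(segments, G=79.3e9):
    """Segments of a stepped shaft act in series: compliances (1/k) add.
    segments: list of (diameter, length) tuples in metres."""
    compliance = sum(1.0 / torsional_stiffness(d, L, G) for d, L in segments)
    return 1.0 / compliance

# Example: a 40 mm shaft, 250 mm overall, with a 100 mm keyed section
# modeled at an assumed effective diameter Deff of 36 mm.
k_keyed = series_stiffness([(0.040, 0.150), (0.036, 0.100)])
k_plain = torsional_stiffness(0.040, 0.250)   # same shaft with no keyway
```

Because the segments act in series, the compliances add, so even a short reduced-diameter keyed section produces a noticeable drop in overall stiffness.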
I have learned to take most news reports with a pound of salt. The VW diesel scandal may deserve at least a pinch of salt. That is my take from the attached article, “VW Dieselgate…”
Most who have taken exams to receive their medical, law, or engineering licenses would likely not pass the exam on any given day as normally administered. I am sure I could solve a sufficient number of problems to get a passing grade on a PE exam but not in the 8-hour window.
In the real world, our clients or employers would like us to work quickly and arrive at a correct solution. Few will expect you to work at the breakneck speed required in exams to solve problems where health and safety are involved. Demonstrating your understanding of the concepts is required but not sufficient, as it may be under exam conditions. Testing, whether of machines or people, is artificial. Until you have attempted to test real-world conditions, you cannot appreciate the difficulty.
Part of the artificial or “cheat mode” associated with testing people is having specific books at arm’s length and having recently used those books and specific charts and tables. Some hire tutors, attend special review classes, or use study guides that cover the expected topics. Most of this will not be at the professional’s side after the exam. Much of the exam material will not be addressed in our day-to-day work. Most will find their job focuses on a particular product or technology. They may find themselves in administrative roles not addressed on the exams they took to receive the required certification in their field.
We accept and understand the lack of correlation between exams and real world conditions. So what is the point of testing? Some benchmark is required and this is the best we know - no one suggests cheating.
If we wish to test in real-world conditions, whose world will we choose? Sticking with just automobiles within the USA, the environment for a car in Florida is very different from that for a car in Wyoming. As I address in my blog “Engineering From Behind the Bushes” (6/3/2015, http://www.jagengrg.com/blog), correlation between test and real world is never 100% and may not exist at all. In that example, there was some real cheating on top of the known lack of correlation.
I cannot help but see a parallel between the inherent flaws in the testing methods used for people and those used for machines. We establish a goal and we design to that goal. Is this criminal, or a solution to an otherwise impossible set of goals? Before we crucify VW et al., would it not be better to educate the lawmakers about the impossibility of testing to real-world conditions? That would be a step towards creating standards that achieve the desired goals. What do you think?
Challenges Before Engineers
The unintended outcomes of past engineering solutions have adversely affected the environment. Environmental sustainability, health, reduced vulnerability and greater joy of living are essential for humanity to flourish. This poses challenges not only for engineers but also for others. The challenges relate to energy and the environment, health and medical sciences, education and information technology, and infrastructure and security. Through creativity and commitment, these engineering challenges can be realistically met.
- Make solar energy affordable.
- Provide energy from nuclear fusion.
- Develop carbon sequestration methods.
- Manage the nitrogen cycle.
- Provide access to clean water.
- Restore and improve urban infrastructure.
- Advance health informatics.
- Engineer better medicines.
- Reverse-engineer the brain.
- Prevent nuclear terror.
- Secure cyberspace.
- Enhance virtual reality.
- Advance personalized learning.
- Engineer the tools for scientific discovery.
Meeting these challenges would be game-changing; success with any of them could dramatically improve life for everyone. Future solutions, however, may depend on lowering the cost of doing things and on using less energy overall on a life-cycle basis.
Engineering Challenges for 21st Century India
Our standard of living's increasing dependence on technology requires a more technologically trained workforce with the right kinds of skills and attitudes. India has around 560 million people under the age of 25, and about half of them are in the 10-19 age group. By the year 2020 India will have 160 million people in the 20-24 age group, more than even China. The next 30-40 years will be a useful, dynamic and productive period for India, at a time when the rest of the world is ageing. India may provide a workforce for the world, provided its people possess skills that meet global standards. Training in technology at different levels therefore assumes priority; conventional methods of training may not be adequate, and new technology-assisted approaches have to be adopted.

In R&D, India still has a long distance to travel. It is not only a question of the number of people involved in these activities; fundamentally, the mindsets of a large number of researchers and administrators require drastic change. Students in science masters programmes, engineering graduate and postgraduate courses, and MBA programmes form the group of potential innovators on which the government, the academic faculty and the heads of corporate India must focus. Unfortunately, as of today, the standard is quite poor.

A large number of people will still remain unskilled, so the development of employment-generating technologies and services will remain a priority for India. A vast majority of our workers of tomorrow will be ill educated and a substantial number functionally illiterate. More than 60% of those under fifteen are enrolled in almost dysfunctional schools, and more than 80% of our young engineers of today, who graduate but receive little education, will be the senior professionals of tomorrow. Engineering challenges will therefore have to include the development of modern technologies that work efficiently with a large labour component, at least for the foreseeable future.
Low-cost, large-scale mid-career skill upgrade programmes will have to be put in place. Innovative use of technology in vocational education and training will be a priority.
The technological challenges before India at present include the following:
- In the energy area, alternative and advanced electricity generation technologies need to be made viable, and there is the move toward a hydrogen economy. Energy availability and access need to be expanded globally while minimizing adverse environmental and social impacts.
- In medicine, there will continue to be new medical testing and treatment equipment, such as prostheses integrated with the human neural system and medical applications of nanotechnology that limit invasive treatments.
- In the environmental area, there is the challenge of limiting or reversing the adverse impact of human existence at a comfortable level, and doing so in an economically viable way.

In addition to globalization, economically sound environmental protection and rapid technological advancement, challenges pertain to:
- National security needs,
- An aging infrastructure, and
- An aging population.
Role of Mechanical Engineers
Mechanical Engineering is concerned with the design, development, research, evaluation, manufacture, installation, testing, operation, maintenance and management of machines, mechanical and mechatronic systems, automated systems and robotic devices, heat transfer processes, thermodynamic and combustion systems, fluid and thermal energy systems, materials and materials handling systems, manufacturing equipment and process plant. Mechanical engineers are critical to technologies that serve people and are widely represented in both the traditional and alternative energy industries. They possess the knowledge and skills needed to design new energy sources and make existing energy sources cleaner and improve the efficiency of current and emerging technologies. They can be at the forefront of developing new technology for environmental remediation, farming and food production, housing, transportation, safety, security, healthcare and water resources. They can create sustainable solutions that meet the basic needs and improve quality of life for all people around the world.
The role of Mechanical engineers in future development will be to
- Develop sustainably through new technologies and techniques, and respond to the global environmental pressures brought about by economic growth;
- Be at the forefront of implementing a system design approach across large and small-scale systems;
- Engage in international collaboration around our critical knowledge and competencies;
- Work in the emerging Bio-Nano technologies to provide solutions in such diverse fields as healthcare, energy, water management, the environment and agriculture management, and
- Create affordable engineering solutions for the poor and deprived.

There are also aspirations in the Third World of moving toward a First World standard of living in a sustainable, environmentally sensitive manner. Mechanical engineering will evolve and collaborate as a global profession in the near future through a shared vision to develop engineering solutions that foster a cleaner, healthier, safer and more sustainable world. This will require creating greater public awareness of the essential contributions of engineering to quality of life consistent with a sustainable world. Other critical choices include focused efforts to improve:
- Advocacy to influence political decision making on issues related to science, engineering and technology;
- Multi-disciplined and systems engineering approaches to multi-scale systems;
- Partnerships among academia, industry and government to expand research and development and to develop the next generation of engineers; and
- Lifelong learning for globally competent engineers and engineering leaders.
What would happen if we condensed the exhaust gas of coal gasification after it has passed through the gas generator, fed the condensate to a water treatment plant, and then drained that water to the ground or reused it? Could this be possible? There would be no exhaust gas, or at least a much-reduced exhaust gas issue, for the environment.
Today we have introduced version 1 of the Android app – a very basic app that will show you the latest discussions on your mobile phone.
You can download the same from
This is a very basic app to get things started. We look forward to senior members and developers getting involved to make it one of the best apps for the mechanical engineering profession. We request you to install the app on your phone and to ask all your mechanical engineering friends to install it as well.
Crystalline semiconductors such as silicon can catch photons and convert their energy into electron flows. New research shows that a little stretching could give one of silicon's lesser-known cousins its own place in the sun.

Nature loves crystals. Salt, snowflakes and quartz are three examples of crystals – materials characterized by the lattice-like arrangement of their atoms and molecules.
Industry loves crystals, too. Electronics are based on a special family of crystals known as semiconductors, most famously silicon.
To make semiconductors useful, engineers must tweak their crystalline lattice in subtle ways to start and stop the flow of electrons.
Semiconductor engineers must know precisely how much energy it takes to move electrons in a crystal lattice.
This energy measure is the band gap. Semiconductor materials such as silicon, gallium arsenide and germanium each have a band gap unique to their crystalline lattice. This energy measure helps determine which material is best for which electronic task.
Now an interdisciplinary team at Stanford has made a semiconductor crystal with a variable band gap. Among other potential uses, this variable semiconductor could lead to solar cells that absorb more energy from the sun by being sensitive to a broader spectrum of light.
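The connection between band gap and the light a solar cell can absorb follows from the photon-energy relation E = hc/λ. The sketch below uses illustrative round-number band gaps (not figures from the Stanford work) to show how a wider or narrower gap shifts the absorption cutoff.

```python
# Photons with energy below the band gap pass through unabsorbed, so a
# material's band gap sets a cutoff wavelength via E = h*c / lambda.
H_C_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def cutoff_wavelength_nm(band_gap_ev):
    """Longest wavelength (nm) a semiconductor of this band gap absorbs."""
    return H_C_EV_NM / band_gap_ev

# Illustrative values: crystalline silicon ~1.12 eV; an unstrained MoS2
# monolayer is commonly quoted near 1.8 eV.
si_cutoff = cutoff_wavelength_nm(1.12)    # ~1107 nm, into the infrared
mos2_cutoff = cutoff_wavelength_nm(1.8)   # ~689 nm, red light
```

A crystal with a variable band gap would sweep this cutoff across a range of wavelengths, which is why it could capture a broader slice of the solar spectrum.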
A colorized image, enlarged 100,000 times, shows an ultrathin layer of molybdenum disulfide stretched over the peaks and valleys of part of an electronic device. Just 3 atoms thick, this semiconductor material is stretched in ways to enhance its electronic potential to catch solar energy.
The material itself is not new. Molybdenum disulfide, or MoS2, is a rocky crystal, like quartz, that is refined for use as a catalyst and a lubricant.
But in Nature Communications, Stanford mechanical engineer Xiaolin Zheng and physicist Hari Manoharan proved that MoS2 has some useful and unique electronic properties that derive from how this crystal forms its lattice.
Molybdenum disulfide is what scientists call a monolayer: A molybdenum atom links to two sulfurs in a triangular lattice that repeats sideways like a sheet of paper. The rock found in nature consists of many such monolayers stacked like a ream of paper. Each MoS2 monolayer has semiconductor potential.
"From a mechanical engineering standpoint, monolayer MoS2 is fascinating because its lattice can be greatly stretched without breaking," said Zheng, an associate professor.
By stretching the lattice, the Stanford researchers were able to shift the atoms in the monolayer. Those shifts changed the energy required to move electrons. Stretching the monolayer made MoS2 something new to science and potentially useful in electronics: an artificial crystal with a variable band gap.
"With a single, atomically thin semiconductor material we can get a wide range of band gaps," Manoharan said. "We think this will have broad ramifications in sensing, solar power and other electronics."
Scientists have been fascinated with monolayers since the Nobel Prize-winning discovery of graphene, a lattice made from a single layer of carbon atoms laid flat like a sheet of paper.
In 2012, nuclear and materials scientists at Massachusetts Institute of Technology devised a theory that involved the semiconductor potential of monolayer MoS2. With any semiconductor, engineers must tweak its lattice in some way to switch electron flows on and off. With silicon, the tweak involves introducing slight chemical impurities into the lattice.
In their simulation, the MIT researchers tweaked MoS2 by stretching its lattice. Using virtual pins, they poked a monolayer to create nanoscopic funnels, stretching the lattice and, theoretically, altering MoS2's band gap.
Band gap measures how much energy it takes to move an electron. The simulation suggested the funnel would strain the lattice the most at the point of the pin, creating a variety of band gaps from the bottom to the top of the monolayer.
The MIT researchers theorized that the funnel would be a great solar energy collector, capturing more sunlight across a wide swath of energy frequencies.
When Stanford postdoctoral scholar Hong Li joined the Department of Mechanical Engineering in 2013, he brought this idea to Zheng. She led the Stanford team that ended up proving all of this by literally standing the MIT theory on its head.
Instead of poking down with imaginary pins, the Stanford team stretched the MoS2 lattice by thrusting up from below. They did this – for real rather than in simulation – by creating an artificial landscape of hills and valleys underneath the monolayer.
They created this artificial landscape on a silicon chip, a material they chose not for its electronic properties, but because engineers know how to sculpt it in exquisite detail. They etched hills and valleys onto the silicon. Then they bathed their nanoscape with an industrial fluid and laid a monolayer of MoS2 on top.
Evaporation did the rest, pulling the semiconductor lattice down into the valleys and stretching it over the hills.
Alex Contryman, a PhD student in applied physics in Manoharan's lab, used scanning tunneling microscopy to determine the positions of the atoms in this artificial crystal. He also measured the variable band gap that resulted from straining the lattice this way.
The MIT theorists and specialists from Rice University and Texas A&M University contributed to the Nature Communications paper.
Team members believe this experiment sets the stage for further innovation on artificial crystals.
"One of the most exciting things about our process is that it is scalable," Zheng said. "From an industrial standpoint, MoS2 is cheap to make."
Atomic Number : 13
Density (20 °C) : 2.70 g/cm³
Atomic Weight : 26.98
Melting point : 660 °C
Boiling point : 2467 °C
Aluminum finds use as a deoxidizer, grain refiner, nitride former and alloying agent in steels. Its ability to scavenge nitrogen has led to its widespread use in drawing-quality steels, especially for automotive applications, which is why aluminum is so often added to high-quality steels.
Metallic aluminum is the most common addition agent. It is sold in the form of notch bars, or stick, and as shot, cones, small ingots, chopped wire, “hockey pucks”, briquettes and other convenient forms such as coiled machine fed wire. These standard products are supplied in bulk or packaged in bags or drums. Purity for deoxidation grades is usually over 95%, the major tramp elements being zinc, tin, copper, magnesium, lead and manganese. Coiled aluminum wire is normally made to 99% minimum specification.
Ferroaluminum, a dense and highly efficient aluminum addition, contains 30-40% Al. It is supplied in lump form, 8 in. x 4 in., 5 in. x 2 in., 5 in. x D, and 2 in. x D, and nominal 12 lb. and 25 lb. pigs, packed in drums and pallet boxes.
Aluminum has a weak effect on hardenability (it is never added for this purpose) and, because of its grain refining properties, actually detracts from deep hardening. Heat treatable steels made to fine grain practice require slightly extra alloying to counteract this phenomenon. Aluminum is, however, a ferrite former and promotes graphitization during long-term holding at elevated temperatures. It also enhances creep, probably because of its grain refining property. Aluminum, therefore, should not be used in Cr-Mo or Cr-Mo-V steels specified for boiler or high temperature pressure vessel applications. Perversely, aluminum is otherwise beneficial to such materials since it reduces scaling through the formation of a more tightly adhering oxide film, particularly if chromium is present as well.
Beyond its important functions in deoxidation and grain size control, aluminum has several applications as an alloying agent. Nitriding steels, such as the Nitralloy family, contain up to 1.5% Al to produce a case with hardness as high as 1100 VHN (70 RC). The outer layer of this case must, however, be removed by grinding to prevent spalling in service. The oxidation (scaling) resistance imparted by aluminum is exploited in some stainless steels and various high temperature alloys. Precipitation hardening stainless steels (17/7 PH, 15/7 PH, etc.) make use of aluminum’s ability to form strength-inducing particles of intermetallic compounds. Aluminum is found in many superalloys for the same reason.
Aluminum combines very readily with nitrogen, and this effect has important commercial uses. Aluminum killed deep drawing steels will be nonaging since AlN is extremely stable. Such steels will not exhibit stretcher strains (Lüder’s lines) or a yield point, even after prolonged holding after cold rolling. Aluminum is also added to nitriding steels for its ability to form an extremely hard case.
Aluminum is an important addition to some HSLA steels, and AlN was the first nitride used to control grain size in normalised and heat treated steels. Again, Al removes nitrogen from solution and provides grain refinement. Both of these effects promote high toughness, especially at low temperatures.
Mention should be made of the effect of aluminum on nonmetallic inclusions, since these will always be present in AK steel. Because aluminum is among the strongest deoxidizers known, it can combine with, and partially or totally reduce, any other oxides present in steel. The subject is quite complex and depends not only on aluminum, but also on oxygen, nitrogen, sulfur, manganese, silicon, and calcium contents. For ordinary steels, however, the pattern is generally as follows: unkilled steels will contain oxides of iron, manganese and silicon, to the extent they are present. Steels deoxidized with silicon and aluminum will contain complex inclusions containing silica, alumina and manganese and iron oxides. As aluminum is increased, it gradually replaces silicon in the inclusions, and the principal inclusions in aluminum killed steels will be alumina and iron-manganese aluminates. Calcium-aluminum deoxidized steels will contain calcium aluminates, the composition and properties of which will depend on oxygen content (see Calcium). The residual Al2O3 in a ladle aluminum deoxidized steel will usually be in the range of 0.015-0.020%. This alumina range will be present regardless of the amount of aluminum used for deoxidation. It is assumed that the remaining alumina of iron aluminate is slagged off.
Aluminum also has a profound effect on the structure of sulfide inclusions. The three basic types of sulfides present in steels have been designated as Type I (fine, randomly distributed spheroids, usually oxysulfides), Type II (intergranular chains which are most harmful to mechanical properties) and Type III (large, globular particles with complex, multiphase structures). Incomplete deoxidation with aluminum results in Type I inclusions; complete, but not excessive deoxidation produces Type II inclusions, while excessive aluminum addition leads to the formation of the Type III particles.
High aluminum contents also promote the generation of interdendritic alumina galaxies, which can impair machinability. Aluminum is added in some stainless grades to improve machinability.
Aluminum, as alumina in calcium aluminate slags, has found extensive use in slag conditioners at LMF stations. These are used to remove sulfur and inclusions, to lower the costs of dolomitic lime, fluorspar, aluminum and calcium carbide additions, to protect the refractory lining, and to improve castability. Applications include both aluminum- and silicon-killed steels.
MRP and MRP II are the predecessors of ERP. An effective organization works with a unified database system, and this post is intended to explain the need for and benefits of such systems.

"MRP II is an integrated information system that synchronizes all aspects of the business."

An MRP II system coordinates these functions by adopting a focal production plan and by using one unified database to plan and update activities in all the systems.
MRP II can be divided into three parts:
- Product planning functions, which take place at the top management level
- Operations planning, handled by staff units
- Operations control functions, conducted by manufacturing line and staff supervisors
Checkpoints among the three divisions provide feedback regarding:
- adequacy of overall resources
- completeness of resource commitments
- quality of performance in carrying out the plans
Advantages of MRP II:
MRP information systems helped managers determine the quantity and timing of raw material purchases. MRP II followed, extending these information systems to the other parts of the manufacturing process. While MRP was concerned primarily with materials, MRP II coordinates the entire manufacturing operation, including materials, finance and human relations.
While MRP allows for the coordination of raw materials purchasing, MRP II facilitates the development of a detailed production schedule that accounts for machine and labor capacity, scheduling the production runs according to the arrival of materials.
It involves developing a production plan from the business plan to specify monthly levels of production for each product line over the next five years (long-term planning). The production department is then expected to produce at the committed levels, the sales department to sell at these levels, and the finance department to assure adequate financial resources to build the product.
The production plan guides the master production schedule (MPS), which gives the weekly quantities of specific products to be built. If capacity is not adequate, then the schedule or the capacity is changed. Once settled, the MPS is used in MRP to create material requirements and priority schedules for production. Capacity requirements planning (CRP) then assures that capacity is available in the scheduled time periods. Finally, execution and control activities ensure that the master schedule is met.
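The MPS-to-MRP step described above can be sketched as a gross-to-net "explosion" down the bill of materials. The products, quantities and inventory figures below are entirely hypothetical.

```python
# A minimal MRP explosion: starting from the master production schedule
# (MPS), compute net component requirements using the bill of materials
# (BOM) and on-hand inventory.
bom = {                     # parent item -> {component: qty per parent}
    "bike": {"frame": 1, "wheel": 2},
    "wheel": {"rim": 1, "spoke": 36},
}
on_hand = {"frame": 20, "wheel": 50, "rim": 10, "spoke": 0}

def explode(item, gross, net_reqs):
    """Net requirement = gross requirement minus on-hand inventory,
    cascaded recursively down the BOM."""
    net = max(0, gross - on_hand.get(item, 0))
    net_reqs[item] = net_reqs.get(item, 0) + net
    for comp, qty in bom.get(item, {}).items():
        explode(comp, net * qty, net_reqs)   # children need net * qty each
    return net_reqs

# The MPS calls for 100 bikes this period.
requirements = explode("bike", 100, {})
```

Note how inventory at one level shields the levels below it: the 50 wheels on hand reduce not only the wheel requirement but also the rims and spokes that would have been needed to build them.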
Important terms and concepts:
The forecasting function seeks to predict demands in the future. Long-range forecasting is important to determining the capacity, tooling, and personnel requirements. Short-term forecasting converts a long-range forecast of part families to short-term forecasts of individual end items.
Resource planning is the process of determining capacity requirements over the long term. Decisions such as whether to build a new plant or to expand an existing one are part of the capacity planning function.
Aggregate planning is used to determine levels of production, staffing, inventory, overtime, and so on over the long term. For instance, the aggregate planning function will determine whether we build up inventories in anticipation of increased demand (from the forecasting function), "chase" the demand by varying capacity using overtime, or do some combination of both. Optimization techniques such as linear programming are often used to assist the aggregate planning process.
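As a toy illustration of the aggregate planning trade-off (without a full linear program), the sketch below compares a "level" plan that builds inventory against a "chase" plan that varies capacity; all demand figures and cost rates are hypothetical.

```python
# Six periods of demand; costs are illustrative placeholders.
demand = [80, 100, 120, 150, 110, 90]
HOLD_COST = 2      # cost per unit held in inventory per period
CHANGE_COST = 5    # cost per unit of production-rate change

def level_plan_cost(demand):
    """Constant production rate; demand swings absorbed by inventory."""
    rate = sum(demand) / len(demand)
    inv, cost = 0.0, 0.0
    for d in demand:
        inv += rate - d                    # inventory builds or drains
        cost += HOLD_COST * max(inv, 0)    # pay holding on positive stock
    return cost

def chase_plan_cost(demand):
    """Production rate follows demand; pay for every rate change."""
    cost, prev = 0.0, demand[0]
    for d in demand:
        cost += CHANGE_COST * abs(d - prev)
        prev = d
    return cost
```

With these particular numbers the level plan is cheaper, but flipping the cost rates flips the answer, which is exactly the trade-off an optimization technique such as linear programming resolves systematically.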
Rough-cut capacity planning (RCCP) is used to provide a quick capacity check of a few critical resources to ensure the feasibility of the master production schedule. Although more detailed than aggregate planning, RCCP is less detailed than capacity requirements planning (CRP), which is another tool for performing capacity checks after the MRP processing.
Capacity requirements planning (CRP) provides a more detailed capacity check.
Long-range planning involves three functions: resource planning, aggregate planning, and forecasting. Intermediate-term planning includes the production planning functions. The plans generated by the long- and intermediate-term planning functions are implemented through short-term control.
You would want MRP II if you want the following:
1) You want the right materials landing on the right dock with the right quantities at the right time.
2) You want your receiving, storing, assembling and shipping of product to accurately flow.
3) You want to efficiently handle the movement of materials between multiple warehouses and destinations.
4) You want to be able to manage high-volume and low-volume materials differently.
5) You want to accurately fulfill orders at increased volume.
E.g.: A company in the industrial goods wholesale distribution business:
- The company has large warehouses in China and in India.
- The company has 10 commercial outlets in India and in Canada.
- Each outlet stocks high-volume products.
- Each warehouse aggregates product from around the world.
- The company takes customer orders over the web, via customer service, and through walk-in outlet traffic.
- Each warehouse fulfills orders from all sources.
An MRP system would help operations and accounting manage material coordination around the world to ensure (1) efficiency and (2) profitability. It accomplishes these goals by providing insight into predictive purchasing, insight into material availability, and accountability in order execution.
Another important concept is material costing. MRP helps provide insight into accurate material costing (product costs, freight, duties, taxes, handling, etc...). Accurate material costing provides insight into product and customer profitability.
Benefits of MRP II in engineering, finance and costing:
- Better control of inventories
- Productive relationship with suppliers
- Improved design control
- Better quality and quality control
- Reduced working capital for inventory
- Improved cash flow through quicker deliveries
- Accurate inventory records
Cr : Improves corrosion resistance and abrasion resistance
Cu : Improves corrosion resistance
Ni : Improves fracture toughness and machinability
Co & Mo : Melting point and service temperature
W & V : High temperature strength and hardness
S : Machinability
Mn : Hardenability
Ti : Hardenability and wear resistance
Al : Toughness; acts as a deoxidant
Si : Hardenability and formability
Mg : Machinability
Differences between Welding, Soldering and Brazing
Welding, soldering and brazing are metal joining processes. Each type of joining process has its own significance, and which one to use for joining two parts depends on many factors. In this article I have covered the differences between welding, soldering and brazing.
| # | Welding | Soldering | Brazing |
|---|---------|-----------|---------|
| 1 | Strongest joints, used to bear load; the strength of the welded portion is usually greater than that of the base metal | Weakest of the three; not meant to bear load, generally used to make electrical contacts | Weaker than welded joints but stronger than soldered joints; can bear load to some extent |
| 2 | Temperature can reach about 3800 °C | Temperature requirement is up to about 450 °C | Temperature may reach about 600 °C |
| 3 | Workpieces must be heated to their melting point | Heating of the workpieces is not required | Workpieces are heated, but below their melting point |
| 4 | Mechanical properties of the base metal may change at the joint due to heating and cooling | No change in mechanical properties after joining | Mechanical properties of the joint may change, but the change is almost negligible |
| 5 | High cost and high skill level are required | Cost and skill requirements are very low | Cost and skill required fall between the other two |
| 6 | Heat treatment is generally required to eliminate undesirable effects of welding | No heat treatment is required | No heat treatment is required after brazing |
| 7 | No preheating of the workpiece is required, as welding is carried out at high temperature | Preheating before soldering helps make a good-quality joint | Preheating is desirable for a strong joint, as brazing is carried out at relatively low temperature |
Did you know that approximately 75% of the total manufacturing costs are already committed at the Conceptual Design phase?
Committed Manufacturing Costs by product design stage
This means that product design optimisation during the conceptual design phase can influence 75% of the committed product manufacturing costs. If you start optimising after the end of the conceptual design phase, you can only influence the remaining 25%. The most effective and beneficial optimisation approach therefore starts as early as possible in the product design process.
Being able to predict the maximum product variation from minimum and maximum worst-case values within the conceptual design phase, and to identify and fine-tune the main contributors, will dramatically decrease the expected product cost and increase overall product quality. Knowing the main contributors to the maximum product variation also lets you assign larger tolerances to low-impact contributors, which decreases the product cost even further.
Applying optimisation at the product conceptual design phase will most likely result in the following benefits:
- Acceleration of product’s time-to-market
- Reduction of associated costs for design changes
- Increase of product quality and robustness
- Analysis and correction of potential failures and associated risks as early as possible
- Identification and assessment of risks during conceptual product design
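The variation prediction described above can be sketched as a simple worst-case tolerance stack-up: sum the extremes, then rank the contributors so the large ones can be tightened and the low-impact ones relaxed. The dimension names and tolerance values below are purely illustrative assumptions, not data from any real product:

```python
# Hypothetical sketch: worst-case variation of a linear dimension stack,
# plus a ranking of contributors. All names and values are assumed.
contributors = {
    "housing_bore":  0.05,   # +/- tolerance in mm (assumed)
    "shaft_dia":     0.02,
    "bearing_fit":   0.01,
    "spacer_length": 0.10,
}

# Worst case: every tolerance at its extreme simultaneously.
worst_case = sum(contributors.values())

# Rank contributors so large tolerances can be attacked first and
# low-impact ones can be given larger tolerances to cut cost.
ranked = sorted(contributors.items(), key=lambda kv: kv[1], reverse=True)

print(f"worst-case variation: +/- {worst_case:.2f} mm")
for name, tol in ranked:
    print(f"{name:>14}: {100 * tol / worst_case:.0f}% of total")
```

Here the spacer dominates the stack, so relaxing the bearing-fit tolerance would barely affect the total while reducing cost.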
All the best,
- Read more...
- 0 comments
Fatigue failures occur when a structural member is subjected to fluctuating stresses or strains from repeated loading of varying or constant magnitude over a period of time. Failure of the member occurs at a stress below its tensile strength. The mechanics of the failure depend on whether the material is considered brittle (sudden fracture) or ductile (gradual fracture).
Engineers undertake fatigue life calculations to design against fatigue failure. Although predicting absolute fatigue life is nearly impossible, the fatigue life calculated by the available methods gives predictions good enough to enable successful engineering design.
There are three main methods used to calculate fatigue life: the stress-life method, the strain-life method, and the crack-propagation method. An engineer must determine which method is best for the particular physical problem posed by the design at hand, because each method has assumptions which must truly represent the physics of the problem. In addition, design philosophies need to be taken into account when choosing a fatigue life calculation method.
The stress-life (S-N) method is used when the number of stress cycles acting on the structure is high (HCF, > 10^3 cycles) and the fatigue life is required in the elastic range of the material.
Fatigue life can vary greatly for small changes in stress or strain level, so fatigue life calculation requires close attention to the stress and strain calculation process and the stress and strain magnitudes obtained.
Depending on the problem and the material for which fatigue life is required, physical testing or finite element analysis may be performed to determine the stress and strain levels in the material. FEA is a mature technology and offers many benefits to the engineering process, so it is commonly used alone or combined with testing (note that testing can be an expensive venture). Testing is sometimes used to validate FEA results, while FEA is used to reduce testing cost. When there is good reason, from analytical checks or historical test results, to believe that the FEA is correct, testing may be omitted.
After testing or FEA has been undertaken, the fatigue life is predicted based on the S-N curve of the material in the stress-life approach.
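As a sketch of the stress-life approach, the S-N curve is often represented by the Basquin relation, sigma_a = sigma_f' * (2*Nf)^b, which can be inverted to estimate cycles to failure from a stress amplitude. The coefficients below are assumed, textbook-typical values for a steel, not properties of any particular material:

```python
# Hedged sketch of a stress-life (S-N) estimate via the Basquin relation.
# sigma_f_prime and b are ASSUMED illustrative values for a steel.
sigma_f_prime = 900.0   # fatigue strength coefficient, MPa (assumed)
b = -0.085              # fatigue strength exponent (assumed)

def cycles_to_failure(sigma_a):
    """Invert sigma_a = sigma_f' * (2*Nf)**b to get cycles Nf."""
    return 0.5 * (sigma_a / sigma_f_prime) ** (1.0 / b)

# Lower stress amplitude -> dramatically longer life, as the S-N
# curve's shallow slope in the HCF regime implies.
for stress in (400.0, 300.0, 200.0):
    print(f"sigma_a = {stress:5.0f} MPa -> Nf ~ {cycles_to_failure(stress):.3g} cycles")
```

This also illustrates the sensitivity noted above: a modest change in stress amplitude shifts the predicted life by orders of magnitude.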
Note that fatigue failure is affected by stress concentration, corrosion, temperature, overload, metallurgical structure, residual stress and combined stress.
KINETIC ENERGY RECOVERY SYSTEM
A kinetic energy recovery system (often known simply as KERS) is an automotive system for recovering a moving vehicle's kinetic energy under braking. The recovered energy is stored in a reservoir (for example a flywheel or high-voltage batteries) for later use under acceleration. Formula One has stated that it supports responsible solutions to the world's environmental challenges, and the FIA allowed the use of 60 kW (82 PS; 80 bhp) KERS in the regulations for the 2009 Formula One season. Teams began testing systems in 2008; the energy can be stored either mechanically (as in a flywheel) or electrically (as in a battery or supercapacitor). With the introduction of KERS in the 2009 season, only four teams used it at some point: Ferrari, Renault, BMW and McLaren, and during the season Renault and BMW stopped using the system. Vodafone McLaren Mercedes became the first team to win an F1 GP with a KERS-equipped car when Lewis Hamilton won the Hungarian Grand Prix on July 26, 2009; their second KERS-equipped car finished fifth. At the following race, Lewis Hamilton became the first driver to take pole position in a KERS car, with his team mate Heikki Kovalainen qualifying second, the first instance of an all-KERS front row. On August 30, 2009, Kimi Räikkönen took the lead of the Belgian Grand Prix with a KERS-aided overtake and won the race in his KERS-equipped Ferrari. It was the first time that KERS contributed directly to a race victory, with second-placed Giancarlo Fisichella claiming, "Actually, I was quicker than Kimi. He only took me because of KERS at the beginning."
for more visit....
Free vibrations:
- Free vibration takes place when a system oscillates under the action of forces inherent in the system itself, set off by an initial disturbance, with no externally applied forces.
- A system in free vibration vibrates at one or more of its natural frequencies, which are properties of the dynamical system, established by its mass and stiffness distribution.
Forced vibrations:
- Vibration that takes place under the excitation of external forces is called forced vibration.
- If the excitation is harmonic, the system is forced to vibrate at the excitation frequency. If the excitation frequency coincides with one of the natural frequencies of the system, a condition of resonance is encountered and dangerously large oscillations may result, which can cause failure of major structures such as bridges, buildings, or airplane wings.
- Calculation of natural frequencies is therefore of major importance in the study of vibrations.
- Because of friction and other resistances, vibrating systems are damped to some degree by dissipation of energy.
- Damping has very little effect on the natural frequency of the system, so natural frequencies are generally calculated on the basis of no damping.
- Damping is, however, of great importance in limiting the amplitude of oscillation at resonance.
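The points above can be illustrated with a single-degree-of-freedom sketch: damping barely shifts the natural frequency, yet it alone limits the resonant amplitude. The mass, stiffness, damping, and force values are assumed for illustration only:

```python
import math

# SDOF sketch with ASSUMED parameter values.
m = 10.0       # mass, kg (assumed)
k = 4.0e4      # stiffness, N/m (assumed)
c = 40.0       # damping coefficient, N*s/m (assumed)

wn = math.sqrt(k / m)                 # undamped natural frequency, rad/s
zeta = c / (2.0 * math.sqrt(k * m))  # damping ratio

# Damping has very little effect on the natural frequency...
wd = wn * math.sqrt(1.0 - zeta**2)   # damped natural frequency, rad/s

# ...but it sets the amplitude at resonance: for harmonic forcing of
# amplitude F0 at w = wn, the steady-state amplitude is F0 / (c * wn).
F0 = 100.0  # force amplitude, N (assumed)
x_resonance = F0 / (c * wn)

print(f"wn = {wn:.1f} rad/s, zeta = {zeta:.4f}, wd = {wd:.1f} rad/s")
print(f"amplitude at resonance: {x_resonance*1000:.1f} mm")
```

With the light damping assumed here, wd differs from wn by well under 0.1%, while halving c would double the resonant amplitude.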
The Design and Working
The design included two parts:
1) The hovercraft
2) The drum seeder
A hovercraft is a vehicle that hovers just above the ground, or over snow or water, on a cushion of air. Also known as an air-cushion vehicle, it is a craft capable of travelling over land, water, ice and other surfaces, both at speed and when stationary. It operates by creating a cushion of high-pressure air between the hull of the vessel and the surface below. Typically this cushion is contained by a flexible skirt. Hovercraft are hybrid vessels operated by a pilot, as an aircraft is, rather than by a captain, as a marine vessel is. They typically hover at heights between 200 mm and 600 mm above any surface and can operate at speeds above 37 km per hour. They can climb gradients of up to 20 degrees. Locations that are not easily accessible to land vehicles due to natural phenomena are best suited for hovercraft.
The hovercraft floats above the ground surface on a cushion of air supplied by the lift fan. The air cushion makes the hovercraft essentially frictionless. Air is blown into the skirt through a hole by the blower. The skirt inflates, and the increasing air pressure acts on the base of the hull, pushing up (lifting) the unit. Small holes made underneath the skirt prevent it from bursting and provide the cushion of air needed. A little effort on the hovercraft propels it in the direction of the push.
As soon as the assembly floats, a blower incorporated in the thrust engine blows air backwards which provides an equal reaction that causes the vehicle to move forward. Little power is needed as the air cushion has drastically reduced friction. Steering effect is achieved by mounting rudders in the airflow from the blower or propeller. A change in direction of the rudders changes the direction of air flow thereby resulting in a change in direction of the vehicle. This is achieved by connecting wire cables and pulleys to a handle. When the handle is pushed it changes the direction of the rudders.
As discussed above, the hovercraft works on an air cushion. In the present prototype, the air cushion is provided by an electric blower (16,000 rpm) which pumps air into the bag skirt, inflating it. The air pressure thus raises the craft above the ground. The vehicle has two engines, the rear and the front. A stator fan attached to the front (lift) engine directs air into the skirt to provide the air pressure needed to lift the craft. The propeller attached to the rear (thrust) engine develops the thrust needed to propel the craft. The propeller is enclosed by the thrust duct, which makes it possible to direct the air. The duct is bell-shaped so that it increases the velocity of the air escaping the duct. The polyester skirt is PVC coated, which gives it more strength to sustain the air pressure, and it is made airtight. The hull is a platform which sustains the entire weight of the craft. A hole is made in the hull through which air enters the skirt.
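The lift reasoning above reduces to a simple balance: the cushion pressure acting on the hull base must carry the craft's weight. All numbers below are assumed for illustration, not measured from the prototype:

```python
# Rough sketch of the lift balance: cushion gauge pressure * hull base
# area = craft weight. Mass and area values are ASSUMED, not measured.
g = 9.81          # gravitational acceleration, m/s^2
mass = 40.0       # total craft mass, kg (assumed)
hull_area = 1.2   # hull base area under pressure, m^2 (assumed)

# Gauge pressure the blower must maintain inside the skirt.
cushion_pressure = mass * g / hull_area   # Pa

print(f"required cushion pressure: {cushion_pressure:.0f} Pa "
      f"({cushion_pressure/1000:.2f} kPa)")
```

The required pressure is only a few hundred pascals, which is why a modest blower can lift the craft: the load is spread over the whole hull base.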
After successfully fabricating the hovercraft and testing it on level ground, the next step was to incorporate a drum seeder into the system. A drum seeder is a manual sowing device consisting of a cylindrical drum attached to a central shaft, with wheels attached to both sides of the shaft. It was developed to ease the manual labour of sowing by traditional methods, but it has several demerits compared with what we were trying to achieve. In this design, the idea is to sow with the help of rotating drums which have the required slots, at the right spacing, on their surface, so that the seeds fall to the field under the force of gravity.
For seeding with a drum seeder, the seeds are soaked in water for 24 hours, followed by incubation in gunny bags and straw for 24-48 hours depending on the weather temperature. The germination length of the seeds should not be more than 1-2 mm, to avoid mechanical injury to the pregerminated seeds and to ensure free flow of seeds in the drum seeder. The pregerminated, free-flowing, clean paddy seeds are filled to 50 percent of the depth of each seed box by opening the hinged cover. Then the covers are closed.
The machine is pulled by one farmer over a well-levelled puddled field after draining the standing water, because standing water more than one cm deep disturbs the seeds sown in straight lines. If there is more than 1 cm of water, the seeds will float; therefore, if there is more standing water in the fields, it is better to leave the fields for 1-2 days for the puddled soil to settle. Covering the seeds below the wet soil surface is not desirable, as the anaerobic condition reduces the germination percentage. A standing posture is required throughout the seeding of the pregerminated paddy seeds in the fields.
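For the traditional wheel-driven drum seeder described above, the seed spacing along the row follows directly from the wheel circumference and the number of slots per drum row, since the drum turns with the ground wheels. The dimensions below are assumed for illustration, not taken from the actual device:

```python
import math

# Sketch of the spacing geometry: one seed drop per slot per wheel
# revolution. Wheel diameter and slot count are ASSUMED values.
wheel_diameter = 0.60   # m (assumed)
slots_per_row = 9       # slot openings per drum row (assumed)

travel_per_rev = math.pi * wheel_diameter       # ground covered per turn, m
seed_spacing = travel_per_rev / slots_per_row   # distance between drops, m

print(f"seed spacing along the row: {seed_spacing*100:.1f} cm")
```

This is why uniform pulling speed matters for the wheel-driven version only indirectly: the spacing is fixed by geometry, but wheel slip in soft mud breaks the geometric relationship and ruins the spacing.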
Areas where improvements can be made over the existing device:
- The seeder has to be pulled at uniform speed through the muddy paddy field to get the correct spacing, which is practically impossible.
- The seeder tends to sink in the mud when fully loaded with seeds, and pulling it through the muddy field requires a lot of energy.
- Footprints of the operator (seeder) were left on the field, in which seeds could end up and result in decay.
- The transplanting device was heavy and would sink in the muddy field
- The process of transplanting was unnecessary and time consuming
In the paddy fields of Kuttanad, one of only two places in the world where agriculture is done below sea level, the soil is very soft and sinkage is very high. The farmers who now sow by broadcasting sink into the mud up to their knees when they step into the field. Moving through this soft soil is difficult even for a single farmer, so pulling or pushing a device through the field is an even more hectic and hard task. This is why we thought of using a hovercraft to carry the farming mechanism over the field: the hovercraft can hover over the ground easily, and remote control of the farming mechanism means that the farmer does not even have to step into the field. He can simply stay at the side of the field, on the boundary, and control the hovercraft and the farming mechanism through the remote.
The benefit of the hovercraft idea is that the work done by the farmer is tremendously reduced: all he has to do is control the craft through the remote, keep the device moving in straight lines, and ensure that the farming happens in the required manner. The energy expended by the farmer is greatly reduced, and the area he can cover in a single day is far greater than in the conventional method, because he needs few or no breaks when his only task is controlling the device.
Testing and Results
Testing was conducted by filling the seed container to half its capacity. The device was made to move a distance of 5 meters on a level floor to simulate the seeding process. The device has to move precisely straight so that the correct spacing is achieved between seeds. During testing, the device tended to move sideways (towards the left). This was due to a slight inclination of the thrust fan duct; once this inclination was removed, testing gave positive results. When the seed drum was rotated by the synchronous motor, which is activated by the forward motion of the hovercraft, seeds were discharged onto the ground with approximately the required spacing. Uniform motion was achieved, and sowing was done in a straight line of 10 meters in less than 2 minutes, with the right spacing in between.
A comparison of time saved: traditional method vs. HVSD
We conducted a survey that covered the target area. Accordingly, the process of traditional farming followed here includes two steps. The first is broadcasting, or sowing of pregerminated seeds, which by experience takes about 24 manhours of work to cover one hectare of field. The second step, transplantation, is done after 18-20 days and takes about 320 manhours/hectare, so the complete process takes about 344 manhours/hectare. In contrast, the final prototype is estimated to take a mere 40 manhours/hectare to complete the same task more effectively under ideal conditions. In addition, the people of the area have 24x7 access to a stable 240 V power supply.
Will keep you updated on the project as and when possible. Post your comments and queries below... and likes too :) ;) Thanks in advance for your support. Kindly support this initiative by voting for and sharing our entry in TechBriefs. For more information and to vote
Is there any process for making a helical gear on a horizontal milling machine using an involute gear cutter? I know this process does not give a good finish, but I want to do as much of the machining as possible with an involute gear cutter. What setup of the dividing head, spindle, and anything else is required for tapering the tooth space?
- Read more...
- 0 comments