Designs, Volume 3, Issue 1 (March 2019) – 19 articles

Cover Story: Modern embedded systems employ heterogeneous processing power, such as CPU-GPU platforms, to meet increasingly demanding requirements. However, alongside the increased performance of these platforms come several challenges, such as the greater complexity of software-to-hardware allocation. Component-based development is an engineering paradigm that facilitates the software development of embedded systems. In this work, we address component-to-hardware allocation for CPU-GPU embedded systems. We decrease the allocation complexity of such systems by introducing a 2-layer component-based architecture: the detailed (low-level) system information is abstracted at a high-level layer by compacting connected components (and their properties) into single units that behave as regular components.
31 pages, 33489 KiB  
Review
Classification and Selection of Cellular Materials in Mechanical Design: Engineering and Biomimetic Approaches
by Dhruv Bhate, Clint A. Penick, Lara A. Ferry and Christine Lee
Designs 2019, 3(1), 19; https://doi.org/10.3390/designs3010019 - 19 Mar 2019
Cited by 117 | Viewed by 16034
Abstract
Recent developments in design and manufacturing have greatly expanded the design space for functional part production by enabling control of structural details at small scales to inform behavior at the whole-structure level. This can be achieved with cellular materials, such as honeycombs, foams and lattices. Designing structures with cellular materials involves answering an important question: What is the optimum unit cell for the application of interest? There is currently no classification framework that describes the spectrum of cellular materials, and no methodology to guide the designer in selecting among the infinite list of possibilities. In this paper, we first review traditional engineering methods currently in use for selecting cellular materials in design. We then develop a classification scheme for the different types of cellular materials, dividing them into three levels of design decisions: tessellation, element type and connectivity. We demonstrate how a biomimetic approach helps a designer make decisions at all three levels. The scope of this paper is limited to the structural domain, but the methodology developed here can be extended to the design of components in thermal, fluid, optical and other areas. A deeper purpose of this paper is to demonstrate how traditional methods in design can be combined with a biomimetic approach. Full article
(This article belongs to the Special Issue Advances in Biologically Inspired Design)
21 pages, 881 KiB  
Article
Building Model-Driven Decision Support System in Product Redesign Plan
by Swee Kuik and Li Diong
Designs 2019, 3(1), 18; https://doi.org/10.3390/designs3010018 - 18 Mar 2019
Cited by 3 | Viewed by 4291
Abstract
Product recovery strategy requires thoughtful consideration of the environmental implications of the operational processes undergone by a manufactured product over its entire lifecycle, from the stages of material processing, manufacturing, assembly, transportation and product use to post-use and end-of-life. In the returns stream from the product use stage, parts and/or component assemblies from a used product have several disposition alternatives for recovery, such as direct reuse, remanufacture, recycling or disposal. Due to the complexity of the manufacturing processes involved in recovery, current decision methodologies address the performance measures of cost, time, waste and quality separately. In this article, an integrated decision model for the used-product returns stream is developed to measure the recovery of utilisation value in the aspects of cost, waste, time and quality collectively. In addition, we propose a model-driven decision support system (DSS) that may be useful for manufacturers in choosing among recovery disposition alternatives. A case application demonstrates the use of the model-driven DSS to measure recovery utilisation value for the disposition alternatives of a used product. Finally, the future work and contributions of this study are discussed. Full article
20 pages, 4812 KiB  
Article
Determining Simulation Parameters of Prototype Door Hinge for Correlation between Simulation and Experimental Results in United Nations Economic Commission for Europe Regulation No: 11 Tests
by Onur Erol and Hande Güler Özgül
Designs 2019, 3(1), 17; https://doi.org/10.3390/designs3010017 - 15 Mar 2019
Cited by 2 | Viewed by 6785
Abstract
In this study, the simulation parameters of the door hinge were investigated in the Z direction to obtain a correlation between the experimental tests and the simulation. Tests and simulations were conducted according to United Nations Economic Commission for Europe Regulation No. 11. The simulation parameters, namely the friction coefficient of the contacts, the effect of bush material assignment and the effect of production imperfections, were examined in turn using the implicit solver of Ansys Mechanical Workbench 18, and the force-displacement curves were compared with the experimental test results in order to decide the optimal parameter settings. In conclusion, a friction coefficient of 0.2, a non-linear bush material and a realistic geometry model were considered the optimal parameter settings for a correlated finite element model of the hinge. Full article
17 pages, 5242 KiB  
Article
Tetrahedron-Based Porous Scaffold Design for 3D Printing
by Ye Guo, Ke Liu and Zeyun Yu
Designs 2019, 3(1), 16; https://doi.org/10.3390/designs3010016 - 18 Feb 2019
Cited by 9 | Viewed by 5567
Abstract
Tissue repair has been the ultimate goal of surgery, especially with the emergence of reconstructive medicine. A large amount of research devoted to exploring innovative porous scaffold designs, both homogeneous and inhomogeneous, has been presented in the literature. The triply periodic minimal surface has been a versatile source of biomorphic structure design due to its smooth surface and high interconnectivity. Nonetheless, many 3D models are rendered in the form of triangular meshes for efficiency and convenience, and the requirement of regular hexahedral meshes then becomes one of the limitations of the triply periodic minimal surface method. In this paper, we make a successful attempt to generate microscopic pore structures using tetrahedral implicit surfaces. To replace conventional Cartesian coordinates, a new coordinate system is built based on the perpendicular distances between a point and the tetrahedral faces to capture the periodicity of a tetrahedral implicit surface. Similar to the triply periodic minimal surface, a variety of tetrahedral implicit surfaces, including P-, D- and G-surfaces, are defined by combinations of trigonometric functions. We further compare triply periodic minimal surfaces with tetrahedral implicit surfaces in terms of shape, porosity and mean curvature to discuss the similarities and differences of the two surfaces. An example of femur scaffold construction is provided to demonstrate the detailed process of modeling porous architectures using the tetrahedral implicit surface. Full article
(This article belongs to the Special Issue Design and Applications of Additive Manufacturing and 3D Printing)
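The implicit surfaces in this abstract are level sets of sums of trigonometric functions. As a rough illustration of the general idea (using the classical Schwarz P triply periodic minimal surface in Cartesian coordinates, not the authors' tetrahedral coordinate system), one can sample such an implicit function on a grid and estimate the porosity of the resulting scaffold:

```python
import math

def schwarz_p(x, y, z):
    # Classical TPMS-style implicit function; the surface is the zero level set.
    # The paper's tetrahedral surfaces use distances to tetrahedron faces instead.
    return (math.cos(2 * math.pi * x)
            + math.cos(2 * math.pi * y)
            + math.cos(2 * math.pi * z))

def porosity_estimate(f, n=40, level=0.0):
    # Fraction of sample points in the unit cell on the "void" side (f < level);
    # a crude voxel estimate of scaffold porosity.
    void = sum(1 for i in range(n) for j in range(n) for k in range(n)
               if f(i / n, j / n, k / n) < level)
    return void / n ** 3

print(porosity_estimate(schwarz_p))  # close to 0.5 by symmetry of the P-surface
```

Raising or lowering `level` thickens or thins the solid phase, which is how porosity is typically tuned in implicit-surface scaffold design.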
30 pages, 5247 KiB  
Article
A Full Model-Based Design Environment for the Development of Cyber Physical Systems
by Roberto Manione
Designs 2019, 3(1), 15; https://doi.org/10.3390/designs3010015 - 13 Feb 2019
Viewed by 4524
Abstract
This paper discusses a full model-based design approach to the applicative development of Cyber Physical Systems, targeting the fast development of logic controllers (i.e., the "Cyber" side of a CPS). The proposed modeling language provides a synthesis between various, somewhat conflicting, constraints: being graphical, easily usable by designers and self-contained with no need for extra information, while leading to efficient implementations, even on low-end embedded systems. Its main features include the ease of describing parallel actions, precise time handling, and communication with other systems according to various interfaces and protocols. Taking advantage of the modeling ease deriving from these features, the language encourages designers to model whole CPSs, that is, their logical and physical sides working together; such whole models are simulated in order to gain insight into their interaction and to spot possible flaws in the controller. Once validated, the very same model, without the physical side, is compiled into the logic controller, ready to be flashed onto the controller board and to interact with the physical side. The discussed language has been implemented in a real model-based development environment, TaskScript, in use for several years in the development of production-grade systems. Results about its effectiveness in terms of model expressivity and design effort are presented; they show the effectiveness of the approach: real production-grade systems have been developed and tested in a few days. Full article
3 pages, 671 KiB  
Correction
Correction: Sharpening the Scythe of Technological Change: Socio-Technical Challenges of Autonomous and Adaptive Cyber-Physical Systems
by Daniela Cancila, Jean-Louis Gerstenmayer, Huascar Espinoza and Roberto Passerone
Designs 2019, 3(1), 14; https://doi.org/10.3390/designs3010014 - 11 Feb 2019
Viewed by 2500
Abstract
We, the authors, wish to make the following corrections to our paper [...] Full article
11 pages, 1424 KiB  
Article
A Competitive Design and Material Consideration for Fabrication of Polymer Electrolyte Membrane Fuel Cell Bipolar Plates
by Noor Ul Hassan, Bahadir Tunaboylu and Ali Murat Soydan
Designs 2019, 3(1), 13; https://doi.org/10.3390/designs3010013 - 8 Feb 2019
Cited by 11 | Viewed by 6191
Abstract
The bipolar plate is one of the most significant components of a polymer electrolyte membrane (PEM) fuel cell, and contributes substantially to the cost structure and the weight of the stacks. A number of graphite polymer composites with different fabrication techniques have been reported in the literature. Graphite composites show excellent electromechanical properties and chemical stability in acidic environments. Compression and injection molding are the most common manufacturing methods used for mass production. In this study, a competitive bipolar plate design and fabrication technique is adopted in order to develop a low-cost and lightweight expanded graphite (EG) polymer composite bipolar plate for an air-breathing PEM fuel cell. Cutting molds are designed to cut fuel flow channels into thin EG sheets (0.6 mm thickness). Three separate sheets, with the flow channel textures removed, are glued to each other with a commercial conductive epoxy to build a single bipolar plate. The final product has a density of 1.79 g/cm3. A bipolar plate with a 20 cm2 active area weighs only 11.38 g. The manufacturing cost is estimated to be 7.77 $/kWe, and a total manufacturing time of 2 minutes/plate is achieved with lab-scale fabrication. A flexural strength of 29 MPa is obtained with the three-point bending method. A total resistance of 22.3 mΩ·cm2 is measured for the three-layer bipolar plate. We presume that the suggested design and fabrication process can be a competitive alternative for small-scale as well as mass production of bipolar plates. Full article
11 pages, 7488 KiB  
Article
Development of a New Span-Morphing Wing Core Design
by Peter L. Bishay, Erich Burg, Akinwande Akinwunmi, Ryan Phan and Katrina Sepulveda
Designs 2019, 3(1), 12; https://doi.org/10.3390/designs3010012 - 7 Feb 2019
Cited by 24 | Viewed by 9663
Abstract
This paper presents a new design for the core of a span-morphing unmanned aerial vehicle (UAV) wing that increases the spanwise length of the wing by fifty percent. The purpose of morphing the wingspan is to increase lift and fuel efficiency during extension, to increase maneuverability during contraction, and to add roll control capability through asymmetrical span morphing. The span morphing is continuous throughout the wing, which comprises multiple partitions. Three main components make up the structure of each partition: a zero Poisson's ratio honeycomb substructure, telescoping carbon fiber spars and a linear actuator. The zero Poisson's ratio honeycomb substructure is an assembly of rigid internal ribs and flexible chevrons. This innovative multi-part honeycomb design allows the ribs and chevrons to be 3D printed separately from different materials in order to offer different directional stiffness, and to accommodate design iterations and future maintenance. Because of its transverse rigidity and spanwise compliance, the design maintains the airfoil shape and the cross-sectional area during morphing. The telescoping carbon fiber spars interconnect to provide structural support throughout the wing while undergoing morphing. The wing model has been computationally analyzed, manufactured, assembled and experimentally tested. Full article
(This article belongs to the Special Issue Design and Applications of Additive Manufacturing and 3D Printing)
14 pages, 4744 KiB  
Concept Paper
Design of Direct Injection Jet Ignition High Performance Naturally Aspirated Motorcycle Engines
by Albert Boretti
Designs 2019, 3(1), 11; https://doi.org/10.3390/designs3010011 - 5 Feb 2019
Cited by 1 | Viewed by 4879
Abstract
Thanks to the adoption of high-pressure direct injection and jet ignition, plus electrically assisted turbo-compounding, the fuel conversion efficiency of Fédération Internationale de l'Automobile (FIA) F1 engines has been spectacularly improved, to values above 46% at peak power and 50% at peak efficiency, by running lean of stoichiometry and stratified in a high-boost, high-compression-ratio environment. In contrast, Fédération Internationale de Motocyclisme (FIM) Moto-GP engines are still naturally aspirated, port-injected and spark-ignited, working with homogeneous mixtures. This old-fashioned but highly optimized design is responsible for relatively low fuel conversion efficiencies, and yet delivers an outstanding specific power density of 200 kW/liter. The potential to improve the fuel conversion efficiency of Moto-GP engines through the adoption of direct injection and jet ignition, currently prevented by the rules, is herein discussed based on simulations. As two-stroke engines may benefit from direct injection and jet ignition more than four-stroke engines, the opportunity for a return of two-stroke engines is also argued, similarly based on simulations. About the same power as today's 1000 cm3 four-stroke engines, but at a better fuel efficiency, may be obtained with lean, stratified, direct injection jet ignition engines: four-stroke of 1450 cm3 or two-stroke of 1050 cm3. About the same power and fuel efficiency may also be delivered with stoichiometric direct injection jet ignition two-stroke engines of 750 cm3. Full article
26 pages, 28019 KiB  
Article
Probability Study on the Thermal Stress Distribution in Thick HK40 Stainless Steel Pipe Using Finite Element Method
by Sujith Bobba, Shaik Abrar and Shaik Mujeebur Rehman
Designs 2019, 3(1), 9; https://doi.org/10.3390/designs3010009 - 1 Feb 2019
Cited by 2 | Viewed by 3880
Abstract
The present work deals with the development of a finite element methodology for obtaining the stress distributions in a thick cylindrical HK40 stainless steel pipe that carries high-temperature fluids. The material properties and loading were assumed to be random variables. Thermal stresses generated along the radial, axial and tangential directions are generally computed using very complex analytical expressions. To circumvent this issue, probability theory and mathematical statistics were applied, as in many engineering problems, allowing the safety to be determined both quantitatively and objectively based on the concepts of reliability. Monte Carlo simulation was used to study the probabilistic characteristics of the thermal stresses and was implemented to estimate the probabilistic distributions of the stresses against the variations arising from the material properties and the load. A 2-D probabilistic finite element code was developed in MATLAB, and the deterministic solution was compared with ABAQUS solutions. The stress variations obtained from the variation of the elastic modulus were found to be small compared to the case where the load alone varied. The probability of failure of the pipe structure was predicted against the variations in internal pressure and thermal gradient. These finite element framework developments are useful for the life estimation of piping structures in high-temperature applications and for the subsequent quantification of the uncertainties in loading and material properties. Full article
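The statistical layer described above (sample random inputs, re-solve, aggregate into failure probabilities) can be sketched independently of the finite element solver. The toy below applies the same Monte Carlo idea to the closed-form Lamé hoop stress at the inner wall of a thick cylinder; the geometry, pressure distribution and allowable stress are invented for illustration and are not taken from the paper:

```python
import random
import statistics

def hoop_stress_inner(p, ri, ro):
    # Lamé thick-cylinder solution: hoop stress at the inner wall (same units as p).
    return p * (ro ** 2 + ri ** 2) / (ro ** 2 - ri ** 2)

random.seed(1)
ri, ro = 0.10, 0.14                 # m, assumed inner/outer radii
p_mean, p_sd = 10.0, 1.0            # MPa, assumed internal-pressure distribution
sigma_allow = 40.0                  # MPa, assumed allowable stress

# Monte Carlo: sample the load, evaluate the stress, aggregate the statistics
samples = [hoop_stress_inner(random.gauss(p_mean, p_sd), ri, ro)
           for _ in range(20000)]
mean_stress = statistics.mean(samples)
prob_fail = sum(s > sigma_allow for s in samples) / len(samples)
print(f"mean hoop stress = {mean_stress:.1f} MPa, P(failure) = {prob_fail:.4f}")
```

In the paper the deterministic solver is a 2-D finite element code rather than a closed form, but the probabilistic wrapper has the same shape: only the stress evaluation changes.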
26 pages, 997 KiB  
Article
A Lazy Bailout Approach for Dual-Criticality Systems on Uniprocessor Platforms
by Saverio Iacovelli and Raimund Kirner
Designs 2019, 3(1), 10; https://doi.org/10.3390/designs3010010 - 1 Feb 2019
Cited by 4 | Viewed by 2800
Abstract
A challenge in the design of cyber-physical systems is to integrate the scheduling of tasks of different criticality while still providing service guarantees for the more critical tasks in the case of resource shortages caused by faults. While standard real-time scheduling is agnostic to the criticality of tasks, the scheduling of tasks with different criticalities is called mixed-criticality scheduling. In this paper, we present the Lazy Bailout Protocol (LBP), a mixed-criticality scheduling method in which low-criticality jobs overrunning their time budget cannot threaten the timeliness of high-criticality jobs, while at the same time the method tries to complete as many low-criticality jobs as possible. The key principle of LBP is to put low-criticality jobs in a low-priority queue for later execution, instead of abandoning them immediately when a high-criticality job overruns its optimistic WCET estimate. To compare mixed-criticality scheduling methods, we introduce a formal quality criterion for mixed-criticality scheduling which, above all else, compares the schedulability of high-criticality jobs and only afterwards the schedulability of low-criticality jobs. Based on this criterion, we prove that LBP behaves better than the original Bailout Protocol (BP). We show that LBP can be further improved by slack-time exploitation and by gain-time collection at runtime, resulting in LBPSG. We also show that these improvements of LBP perform better than the analogous improvements based on BP. Full article
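The key LBP idea stated in the abstract can be caricatured in a few lines: on a high-criticality overrun, pending low-criticality jobs are demoted to a low-priority queue and resumed if budget remains, rather than being abandoned. The job model and budget accounting below are deliberately simplified and are not the authors' formal protocol:

```python
from collections import deque

def lazy_bailout(jobs, budget):
    """Toy LBP run. jobs: (name, crit, wcet_opt, actual) in release order;
    budget: total available time units. Returns names of completed jobs."""
    ready = deque(jobs)
    low_prio = deque()          # demoted low-criticality jobs, run later
    done, t = [], 0
    while ready:
        name, crit, wcet_opt, actual = ready.popleft()
        if t + actual > budget:
            break               # out of time
        t += actual
        done.append(name)
        if crit == "HI" and actual > wcet_opt:
            # Bailout: demote pending LO jobs instead of dropping them.
            low_prio.extend(j for j in ready if j[1] == "LO")
            ready = deque(j for j in ready if j[1] == "HI")
    while low_prio:
        name, crit, wcet_opt, actual = low_prio.popleft()
        if t + actual > budget:
            break
        t += actual
        done.append(name)
    return done

jobs = [("h1", "HI", 2, 4), ("l1", "LO", 1, 1),
        ("h2", "HI", 2, 2), ("l2", "LO", 1, 1)]
print(lazy_bailout(jobs, budget=10))  # ['h1', 'h2', 'l1', 'l2']
```

With `budget=6` the same job set completes only the high-criticality jobs, mirroring the ordering of concerns in the paper's quality criterion: high-criticality schedulability first, low-criticality completions second.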
15 pages, 5522 KiB  
Article
Retrofit of Residential Buildings in Europe
by Giuliana Scuderi
Designs 2019, 3(1), 8; https://doi.org/10.3390/designs3010008 - 24 Jan 2019
Cited by 13 | Viewed by 4458
Abstract
Recently, many cities in Europe have been encouraging the recovery of the existing residential heritage. To maximize the benefits of these campaigns, a multi-purpose programme of architectural, functional and structural retrofit is essential. Additionally, a fast-changing society requires new living criteria; new models need to be developed to respond to the developing requirements of communities and markets. This paper proposes a method of analysis of 49 residential retrofit projects, a range of "best practices" presented through the definition of strategies, actions and thematic packages, aiming to summarize, in a systematic way, the complex panorama of the state of the art in Europe. Each project was analyzed using a data sheet, while synoptic views and tables provided key interpretations and a panorama of strategies and approaches. The analysis of the state of the art showed that lightweight interventions, achieved using dry stratified construction technologies of structure/cladding/finishing, are a widespread approach to renovation and requalification, both for superficial/two-dimensional actions and for volumetric/spatial actions. The study also highlights the leading role of the envelope within retrofit interventions. The retrofit approaches appear to reach the greatest efficiency when reversible, because only in this way do they ensure environmentally friendly actions with the possibility of dismantling. The intervention should improve the flexibility of the existing construction with a correct balance between planning for the present and planning for the future. Full article
(This article belongs to the Special Issue Integrated Sustainable Building Design, Construction and Operation)
23 pages, 2676 KiB  
Article
Adaptive Time-Triggered Multi-Core Architecture
by Roman Obermaisser, Hamidreza Ahmadian, Adele Maleki, Yosab Bebawy, Alina Lenz and Babak Sorkhpour
Designs 2019, 3(1), 7; https://doi.org/10.3390/designs3010007 - 22 Jan 2019
Cited by 14 | Viewed by 4922
Abstract
The static resource allocation in time-triggered systems offers significant benefits for the safety arguments of dependable systems. However, adaptation is a key factor for energy efficiency and fault recovery in Cyber-Physical Systems (CPS). This paper introduces the Adaptive Time-Triggered Multi-Core Architecture (ATMA), which supports adaptation using multi-schedule graphs while preserving the key properties of time-triggered systems, including implicit synchronization, temporal predictability and avoidance of resource conflicts. ATMA is an overall architecture for safety-critical CPS based on a network-on-a-chip with building blocks for context agreement and adaptation. Context information is established in a globally consistent manner, providing the foundation for the temporally aligned switching of schedules in the network interfaces. A meta-scheduling algorithm computes schedule graphs and avoids state explosion using reconvergence horizons for events. For each tile, the relevant part of the schedule graph is efficiently stored using difference encodings and interpreted by the adaptation logic. The architecture was evaluated using an FPGA-based implementation and example scenarios employing adaptation for improved energy efficiency. The evaluation demonstrated the benefits of adaptation while showing the overhead and the trade-off between the degree of adaptation and the memory consumption for multi-schedule graphs. Full article
14 pages, 872 KiB  
Article
A Two-Layer Component-Based Allocation for Embedded Systems with GPUs
by Gabriel Campeanu and Mehrdad Saadatmand
Designs 2019, 3(1), 6; https://doi.org/10.3390/designs3010006 - 19 Jan 2019
Cited by 1 | Viewed by 3231
Abstract
Component-based development is a software engineering paradigm that can facilitate the construction of embedded systems and tackle their complexities. Modern embedded systems have more and more demanding requirements. One way to cope with such a versatile and growing set of requirements is to employ heterogeneous processing power, i.e., CPU-GPU architectures. The new CPU-GPU embedded boards deliver increased performance but also introduce additional complexity and challenges. In this work, we address the component-to-hardware allocation for CPU-GPU embedded systems. The allocation for such systems is much more complex due to the increased amount of GPU-related information. For example, while in traditional embedded systems the allocation mechanism may consider only the CPU memory usage of components to find an appropriate allocation scheme, in heterogeneous systems the GPU memory usage also needs to be taken into account in the allocation process. This paper aims at decreasing the component-to-hardware allocation complexity by introducing a two-layer component-based architecture for heterogeneous embedded systems. The detailed CPU-GPU information of the system is abstracted at a high-level layer by compacting connected components into single units that behave as regular components. The allocator, based on the compacted information received from the high-level layer, computes feasible allocation schemes with a decreased complexity. In the last part of the paper, the two-layer allocation method is evaluated using an existing embedded system demonstrator, namely, an underwater robot. Full article
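The compaction step the abstract describes can be illustrated with a small sketch: connected components, each with CPU and GPU memory footprints, are folded into single high-level units whose requirements are the component sums, so the allocator reasons over far fewer items. The component names and resource fields here are invented for illustration and are not taken from the paper:

```python
def compact(components, groups):
    """components: name -> {"cpu_mem": ..., "gpu_mem": ...} (illustrative fields);
    groups: lists of names, each a connected subgraph folded into one unit."""
    units = {}
    for i, group in enumerate(groups):
        units[f"unit{i}"] = {
            # A compacted unit advertises the summed requirements of its members,
            # so it behaves like a regular component at the high-level layer.
            "cpu_mem": sum(components[n]["cpu_mem"] for n in group),
            "gpu_mem": sum(components[n]["gpu_mem"] for n in group),
        }
    return units

# Hypothetical vision pipeline of an underwater robot
components = {
    "camera": {"cpu_mem": 8, "gpu_mem": 0},
    "filter": {"cpu_mem": 4, "gpu_mem": 64},
    "detect": {"cpu_mem": 6, "gpu_mem": 128},
    "logger": {"cpu_mem": 2, "gpu_mem": 0},
}
units = compact(components, [["camera", "filter", "detect"], ["logger"]])
print(units["unit0"])  # {'cpu_mem': 18, 'gpu_mem': 192}
```

The allocator then places two units instead of four components; any hardware node satisfying a unit's summed requirements can host the whole connected group.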
2 pages, 237 KiB  
Editorial
Acknowledgement to Reviewers of Designs in 2018
by Designs Editorial Office
Designs 2019, 3(1), 5; https://doi.org/10.3390/designs3010005 - 19 Jan 2019
Viewed by 1978
Abstract
Rigorous peer-review is the corner-stone of high-quality academic publishing [...] Full article
22 pages, 1746 KiB  
Article
Real-Time Behaviour Planning and Highway Situation Analysis Concept with Scenario Classification and Risk Estimation for Autonomous Vehicles
by Bence Dávid, Gergő Láncz and Gergely Hunyady
Designs 2019, 3(1), 4; https://doi.org/10.3390/designs3010004 - 15 Jan 2019
Cited by 3 | Viewed by 3498
Abstract
The development of autonomous vehicles is one of the most active research areas in the automotive industry. The objective of this study is to present a concept for analysing a vehicle’s current situation and a decision-making algorithm which determines an optimal and safe series of manoeuvres to be executed. Our work focuses on a machine learning-based approach by using neural networks for risk estimation, comparing different classification algorithms for traffic density estimation and using probabilistic and decision networks for behaviour planning. A situation analysis is carried out by a traffic density classifier module and a risk estimation algorithm, which predicts risks in a discrete manoeuvre space. For real-time operation, we applied a neural network approach, which approximates the results of the algorithm we used as a ground truth, and a labelling solution for the network’s training data. For the classification of the current traffic density, we used a support vector machine. The situation analysis provides input for the decision making. For this task, we applied probabilistic networks. Full article
(This article belongs to the Section Vehicle Engineering Design)
11 pages, 227 KiB  
Article
Designing Flexibility and Adaptability: The Answer to Integrated Residential Building Retrofit
by Giuliana Scuderi
Designs 2019, 3(1), 3; https://doi.org/10.3390/designs3010003 - 11 Jan 2019
Cited by 11 | Viewed by 7474
Abstract
In discussions of building retrofit in Europe, attention often focuses on the residential building stock built after the Second World War, which represents 75% of the total number of buildings on the territory. Many cities are now encouraging campaigns to retrofit the housing heritage built after the Second World War, since, in terms of cost, time, financing, consumption, and sustainability, this practice is more convenient than building anew. To maximize the benefits of these retrofit campaigns, it is essential to promote multi-purpose, innovative strategies that simultaneously consider architectural, functional, and structural aspects. In the field of housing in particular, it is necessary to develop new models able to answer to the living style of a dynamic society; today as in the past, one of the downfalls of the housing sector is its failure to recognize the human dimension within the design process. This paper reviews past architectural practices for achieving adaptability and flexibility in the residential sector and evaluates strategies for integrated retrofit based on two macro-areas: architectural/societal/functional and structural/technological/constructional. Full article
(This article belongs to the Special Issue Integrated Sustainable Building Design, Construction and Operation)
21 pages, 1057 KiB  
Article
Quantifying Usability via Task Flow-Based Usability Checklists for User-Centered Design
by Toshihisa Doi and Toshiki Yamaoka
Designs 2019, 3(1), 2; https://doi.org/10.3390/designs3010002 - 10 Jan 2019
Viewed by 3459
Abstract
In this study, we investigated the effectiveness of a method for quantifying overall product usability using an expert review. The expert review employed a general-purpose, task flow-based usability checklist that yields a single quantitative usability score. This checklist was expected to reduce rating variation among evaluators. To confirm its effectiveness, two experiments were performed. In Experiment 1, the usability score obtained with the proposed checklist was compared with traditional usability measures (task completion ratio, task completion time, and subjective rating). The results demonstrated that the usability score obtained with the proposed checklist shows a tendency similar to that of the traditional measures. In Experiment 2, we investigated the inter-rater agreement of the proposed checklist by comparing it with a similar method. The results demonstrate that the inter-rater agreement of the proposed task flow-based usability checklist is greater than that of structured user interface design and evaluation. Full article
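The core idea of reducing a checklist to a single quantitative score can be sketched as follows. This is a hypothetical illustration only: the grouping of items by task step, the pass/fail ratings, and the equal weighting of steps are assumptions, not the authors' actual checklist or scoring formula:

```python
# Hypothetical sketch of a task flow-based usability score: checklist items
# are grouped by task step, rated pass/fail, and the overall score is the
# mean per-step pass rate scaled to 0-100. Weighting is an assumption.

def usability_score(checklist: dict) -> float:
    """checklist maps a task-step name to a list of (item, passed) pairs.

    Returns a 0-100 score. Each task step is weighted equally regardless
    of how many checklist items it contains.
    """
    step_scores = []
    for _step, items in checklist.items():
        if not items:
            continue
        passed = sum(1 for _item, ok in items if ok)
        step_scores.append(passed / len(items))
    if not step_scores:
        return 0.0
    return 100.0 * sum(step_scores) / len(step_scores)

if __name__ == "__main__":
    checklist = {
        "find_function": [("label visible", True), ("reachable in one tap", True)],
        "complete_task": [("no error state", True), ("undo available", False)],
    }
    print(usability_score(checklist))
```

A single scalar like this is what allows direct comparison against measures such as task completion ratio or completion time.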
27 pages, 3033 KiB  
Article
A Computational Framework for Procedural Abduction Done by Smart Cyber-Physical Systems
by Imre Horváth
Designs 2019, 3(1), 1; https://doi.org/10.3390/designs3010001 - 25 Dec 2018
Cited by 5 | Viewed by 4600
Abstract
To provide appropriate services in social and human application contexts, smart cyber-physical systems (S-CPSs) need ampliative reasoning and decision-making (ARDM) mechanisms. As one option, procedural abduction (PA), a knowledge-based computation and learning mechanism, is suggested for self-managing S-CPSs. The objective of this article is to provide a comprehensive description of the computational framework proposed for PA. Towards this end, the essence of smart cyber-physical systems is first discussed, and the main recent research results on computational abduction and ampliative reasoning are then reviewed. PA facilitates beliefs-driven contemplation of the momentary performance of S-CPSs, including a ‘best option’-based setting of the servicing objective and the realization of any demanded adaptation. The computational framework of PA includes eight clusters of computational activities: (i) run-time extraction of signals and data by sensing, (ii) recognition of events, (iii) inferring about existing situations, (iv) building awareness of the state and circumstances of operation, (v) devising alternative performance-enhancement strategies, (vi) deciding on the best system adaptation, (vii) devising and scheduling the implied interventions, and (viii) actuating effectors and controls. Several cognitive algorithms and computational actions are used to implement PA in a compositional manner. PA necessitates not only a synergic interoperation of the algorithms, but also an objective-dependent fusion of pre-programmed and run-time-acquired chunks of knowledge. A fully fledged implementation of PA is underway, which will make verification and validation possible in the context of various smart CPSs. Full article
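The eight activity clusters enumerated in the abstract form a natural sequential pipeline. The sketch below shows one way such a cycle could be composed; the stage names, the pure-pipeline data flow, and the callable interface are illustrative assumptions, since the paper describes the framework conceptually rather than as this API:

```python
# Illustrative sketch only: the eight clusters of procedural abduction (PA)
# arranged as a sequential pipeline, each stage a callable consuming the
# previous stage's output. Names and data flow are assumptions.

PA_STAGES = [
    "sense_signals",           # (i)    run-time extraction of signals and data
    "recognise_events",        # (ii)   recognition of events
    "infer_situations",        # (iii)  inferring about existing situations
    "build_awareness",         # (iv)   awareness of state and circumstances
    "devise_strategies",       # (v)    alternative enhancement strategies
    "decide_adaptation",       # (vi)   deciding on the best system adaptation
    "schedule_interventions",  # (vii)  devising/scheduling the interventions
    "actuate_effectors",       # (viii) actuating effectors and controls
]

def run_pa_cycle(stage_impls: dict, raw_signals):
    """Thread data through the eight clusters in order."""
    data = raw_signals
    for name in PA_STAGES:
        data = stage_impls[name](data)
    return data

if __name__ == "__main__":
    # Trivial stand-in stages that just record the path taken through the cycle.
    impls = {name: (lambda n: (lambda trace: trace + [n]))(name)
             for name in PA_STAGES}
    print(run_pa_cycle(impls, []))
```

In a real S-CPS the stages would not be pure functions of a single value; the abstract's point about fusing pre-programmed with run-time-acquired knowledge suggests each stage would also read from and update a shared belief store.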