Software engineering research and practice 2012

Software systems have a strong tendency to grow over time, both in size and complexity. Software architectures are used to keep that size and complexity manageable.

A software architecture provides a global description of a far more detailed software system by giving an overview in terms of components and links: components are the main relevant parts, and links are the relevant connections between them.

Comparative studies of metamodeling and AI-based techniques in damage detection of structures.
Despite advances in computer capacity, the enormous computational cost of running complex engineering simulations makes it impractical to rely exclusively on simulation for the purpose of structural health monitoring.

To cut down the cost, surrogate models, also known as metamodels, are constructed and then used in place of the actual simulation models. In this study, structural damage detection is performed using two approaches.

In the first approach, taking the dynamic behavior of a structure as input variables, ten metamodels are constructed, trained, and tested to detect the location and severity of damage in civil structures.

The variation of running time, the mean square error (MSE), the number of training and testing data, and other indices for measuring prediction accuracy are defined and calculated in order to inspect the advantages as well as the shortcomings of each algorithm.
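
As a rough sketch of how such an accuracy index is computed, the snippet below evaluates the MSE of two hypothetical surrogate models on held-out test data; the values and model outputs are invented for illustration, not taken from the study.

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Mean square error (MSE) between measured and predicted responses."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Invented held-out test data: "measured" damage severities vs. the
# predictions of two hypothetical surrogate models.
measured    = [0.12, 0.30, 0.45, 0.51]
surrogate_a = [0.10, 0.33, 0.44, 0.55]
surrogate_b = [0.20, 0.25, 0.40, 0.60]

print(mean_squared_error(measured, surrogate_a))  # lower MSE -> better fit
print(mean_squared_error(measured, surrogate_b))
```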

In the second approach, the eventual damage of a structure is first located precisely using a modal strain energy based index (MSEBI). Then, to efficiently reduce the computational cost of model updating during the optimization process of damage severity detection, the MSEBI of structural elements is evaluated using a properly trained surrogate model.

The results indicate that, after determining the damage location, the proposed method for damage severity detection leads to a significant reduction in computational time compared to the finite element method. Furthermore, driving the colliding bodies optimization (CBO) algorithm with an efficient surrogate of the finite element (FE) model maintains acceptable accuracy in damage severity detection.
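
The general pattern is easy to sketch, although the code below is only an invented stand-in: the expensive FE analysis inside the severity-optimization loop is replaced by a cheap trained surrogate, and a toy random search stands in for the CBO algorithm.

```python
import numpy as np

def surrogate_msebi(severity):
    """Hypothetical trained surrogate: maps a candidate damage-severity
    vector to predicted MSEBI values (stands in for a full FE analysis)."""
    return 1.0 - np.exp(-2.0 * np.asarray(severity, dtype=float))

def objective(severity, target_msebi):
    """Mismatch between surrogate-predicted and measured indices."""
    return float(np.sum((surrogate_msebi(severity) - target_msebi) ** 2))

# Toy random search standing in for CBO; each iteration costs one cheap
# surrogate call instead of one expensive FE run.
rng = np.random.default_rng(0)
target = surrogate_msebi([0.3, 0.0, 0.6])   # pretend-measured indices
best, best_f = None, np.inf
for _ in range(5000):
    candidate = rng.uniform(0.0, 1.0, size=3)
    f = objective(candidate, target)
    if f < best_f:
        best, best_f = candidate, f
print(best.round(2))   # should approach [0.3, 0.0, 0.6]
```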

Software operation time evaluation based on MTM.
Previous techniques have focused on design recommendations that are not always supported by empirical evidence. In this paper, a new method for measuring required operation time, as a basis for improving interaction and interface design, is presented. The method is inspired by an adaptation of the well-known MTM method for time measurement and task analysis in industrial environments.
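
The flavor of an MTM-style estimate can be sketched as follows; the motion elements and time values are invented for illustration (only the conversion 1 TMU = 0.036 s is standard MTM), and this is not the paper's actual measurement procedure.

```python
# MTM-style estimate: decompose a UI task into elementary motions, each
# with a predetermined time value, and sum the values.  MTM tables
# express times in TMU (1 TMU = 0.036 s); the values here are made up.
TMU_SECONDS = 0.036

motion_times_tmu = {
    "reach_to_mouse": 10.0,
    "move_pointer":   15.0,
    "click":           2.0,
    "type_character":  8.0,
}

def operation_time(motions):
    """Total predicted time (seconds) for a sequence of motions."""
    return sum(motion_times_tmu[m] for m in motions) * TMU_SECONDS

# "Open file" modeled as reach, move, click, then typing a 4-char name.
task = ["reach_to_mouse", "move_pointer", "click"] + ["type_character"] * 4
print(f"{operation_time(task):.2f} s")
```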

Empirical evidence supports the relationship between the proposed measure and the time required for operations by software users.

Language Support for Optional Functionality.
We recommend a programming construct - the availability check - for programs that need to adjust automatically to the presence or absence of segments of code.

The idea is to check for the existence of a valid definition before a function call is invoked. The vision is to enable customization of application functionality through the addition or removal of optional components, but without requiring a complete rebuild. Essentially, our approach attempts to combine the flexibility of dynamic libraries with the usability of utility dependency libraries. We outline the benefits over prevalent strategies, mainly in terms of development complexity, crudely measured as fewer lines of code.
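
Python has no such construct, but the intent can be approximated at the library level; the sketch below uses hypothetical module and function names and is not the construct proposed in the paper.

```python
import importlib

def render_report(data):
    """Call an optional component only if a valid definition exists."""
    try:
        charts = importlib.import_module("optional_charts")  # hypothetical
    except ImportError:
        charts = None
    plot = getattr(charts, "plot_summary", None) if charts else None
    if callable(plot):          # the "availability check"
        plot(data)              # optional component is present
    else:
        print("charts unavailable; falling back to plain text:", data)

render_report([1, 2, 3])
```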

We also allude to performance and flexibility facets. A preliminary implementation and figures from an early experimental evaluation are presented.

Coordinating Functional Processes with Haskell.
This paper presents Haskell, a parallel functional language based on coordination.

Haskell supports lazy stream communication and, at the coordination level, facilities for the specification of data-parallel programs. Haskell supports a clean and complete separation, both semantic and syntactic, between the coordination and computation levels of programming, with several benefits for parallel program engineering.
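
As a loose analogy only - not the paper's Haskell code - the Python sketch below separates pure computation (producer and transformer functions) from a coordination level that wires them into a pipeline, with generators standing in for lazy stream communication.

```python
def naturals():                      # computation level: a producer process
    n = 0
    while True:
        yield n
        n += 1

def squares(stream):                 # computation level: a transformer process
    for x in stream:
        yield x * x

def take(n, stream):                 # coordination helper: demand n values
    return [next(stream) for _ in range(n)]

# Coordination level: compose processes into a static pipeline; nothing
# is computed until values are demanded (lazy streams).
pipeline = squares(naturals())
print(take(5, pipeline))             # [0, 1, 4, 9, 16]
```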

The implementation of some well-known applications in Haskell is presented, demonstrating its expressiveness and allowing for elegant, simple, and concise specification of any static pattern of parallel, concurrent, or distributed computation. This experiment gave us important feedback for evaluating Haskell features, helping us to answer questions such as how expressive Haskell is for representing known parallel computational patterns, how easy it is to build large-scale parallel programs in an elegant and concise way, and how efficient Haskell programs are.

Based on our conclusions, we suggest new features to be incorporated in Haskell to improve its expressiveness and performance. We also present performance figures for the MCP-Haskell benchmark.

Software Process Line Modeling and Evolution.
Companies formalize their software processes as a way of organizing their development projects.

These companies usually work with families of processes, since differences in project requirements and …

In this blog, we bring you some tips, tricks, and recommendations to help you adopt best practices from the software engineering industry. The purpose of this article is to create awareness among coders that writing clear, concise code is the way to go. In the software engineering community, standardized coding conventions help keep the code relevant and useful for clients, future developers, and the coders themselves.

Any professional programmer will tell you that the majority of their time is spent reading code, rather than writing it.

You are more likely to work on existing code than to write something from scratch. Writing highly optimized mathematical routines, or creating complex libraries, is relatively easy.

Writing lines of code that can be instantly understood by another software engineer is more of a challenge. It may seem like extra effort at the time, but this additional work will pay dividends in the future.

It will make your life so much easier when returning to update your code. In addition, the debugging process should be much smoother for yourself, or for other engineers who need to edit your work.

Professionally written code is clean and modular. It is easily readable, as well as logically structured into modules and functions. Using modules makes your code more efficient, reusable, and organized. Remember, future-proofing your code in this way should always be prioritized over finishing quickly. You may think you are saving time by hacking away, but in fact you are creating hours of extra work down the line. To optimize your code, you also need to make sure it executes quickly.
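
As a small invented illustration of that modular style (the function names and the discount rule are made up):

```python
# Small, named functions with single responsibilities instead of one
# long script; each piece can be read, tested, and reused on its own.
def subtotal(prices):
    """Sum of item prices."""
    return sum(prices)

def apply_discount(amount, rate):
    """Reduce amount by a fractional discount rate (0.1 == 10%)."""
    return amount * (1.0 - rate)

def format_receipt(amount):
    """Render a final amount for display."""
    return f"Total due: ${amount:.2f}"

prices = [19.99, 4.50, 12.00]
print(format_receipt(apply_discount(subtotal(prices), 0.10)))
```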

In the world of software engineering, writing code quickly and correctly is pointless if the end product is slow and unstable.

This is especially true in large, complex programs. Even the smallest amount of lag can add up considerably, rendering your application - and all of your engineering work - useless. Equally important is minimizing the memory footprint of your code. In terms of performance, working with many functions on a large amount of data can drastically reduce the efficiency of your code. Refactoring is basically improving the structure of your code, without making modifications to its actual functionality.
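
A minimal before/after sketch of refactoring, with an invented example: the structure improves while the observable behavior stays identical.

```python
# Before: formatting logic buried inline in one function.
def report_before(users):
    lines = []
    for u in users:
        lines.append(u["name"].strip().title() + ": " + str(len(u["posts"])))
    return "\n".join(lines)

# After: the same behavior, restructured into small named pieces.
def display_name(user):
    """Normalize a raw user name for display."""
    return user["name"].strip().title()

def report_after(users):
    return "\n".join(f"{display_name(u)}: {len(u['posts'])}" for u in users)

users = [{"name": "  ada lovelace ", "posts": [1, 2]},
         {"name": "alan turing", "posts": [3]}]
assert report_before(users) == report_after(users)  # behavior unchanged
print(report_after(users))
```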


