238 |
Expert Systems with Applications, Volume 37, Issue 1, January 2010, Pages 878-885
Te-Sheng Li, Chih-Ming Hsu
Abstract
The major research focus in integrated circuits (ICs) is on increasing circuit performance and functional complexity. The lithography process is the most critical step in the fabrication of nanostructures for integrated circuit manufacturing. The most important variable in the lithography process is the line-width, or critical dimension (CD), which is perhaps the variable with the most direct impact on device performance and speed. This study presents a hybrid approach combining Taguchi’s robust design, back-propagation (BP) neural network modeling and particle swarm optimization (PSO) for sub-35 nm contact-hole fabrication in the lithography process. The BP neural network is employed to model the functional relationship between the input parameters and the target responses. Particle swarm optimization is then used to optimize the parameter settings through the well-trained BP model, with each particle assessed by a fitness function. The PSO algorithm applies velocity-updating and position-updating formulas to a population of particles so that better particles are generated. Compared with fabricated and measured data, this approach achieves the optimal parameter settings for minimized or target CDs, while reducing CD variation through design of experiments. The experimental results show that the proposed approach to process modeling and parameter optimization is feasible and effective for sub-35 nm contact-hole fabrication.
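The velocity- and position-update step at the heart of PSO can be illustrated with a short sketch. This is a generic, minimal PSO in Python, not the authors' implementation: the cd_model objective below is a hypothetical stand-in for their trained BP neural network, and the bounds, swarm size and inertia/acceleration coefficients are assumed values chosen only for illustration.

```python
# Illustrative sketch only: a generic particle swarm optimizer minimizing a
# stand-in critical-dimension (CD) model. The objective, bounds and PSO
# hyperparameters are hypothetical, not the authors' trained BP network.
import numpy as np

rng = np.random.default_rng(0)

def cd_model(x):
    # Placeholder for the trained BP network mapping normalised process
    # parameters (e.g. dose, focus, bake temperature) to a predicted CD.
    return np.sum((x - 0.3) ** 2, axis=-1) + 35.0

def pso(objective, dim=3, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(0.0, 1.0, (n_particles, dim))   # normalised parameter settings
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Standard velocity- and position-update formulas.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

best_x, best_cd = pso(cd_model)
print(best_x, best_cd)
```

In the paper, the objective would instead combine the BP-model predictions with desirability functions to form the fitness (Section 3.3 of the outline); the sketch only shows the swarm mechanics.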
Article Outline
1. Introduction
2. Experimental set-up and problem description
2.1. Experimental set-up
2.2. Problem description
3. An optimization model for lithography process
3.1. Back-propagation neural network
3.2. Particle swarm optimization
3.3. The fitness function with desirability functions
3.4. The proposed approach
4. Experimental results
4.1. Modeling using BP neural network
4.2. Optimizing parameters for minimized CDs through PSO
4.3. Confirmation experiment
4.4. Discussion
5. Conclusion
References
239 |
Journal of Biotechnology, Volume 148, Issue 1, 1 July 2010, Pages 70-75
Frank Sonntag, Niels Schilling, Katja Mader, Mathias Gruchow, Udo Klotzbach, Gerd Lindner, Reyk Horland, Ilka Wagner, Roland Lauster, Steffen Howitz, Silke Hoffmann, Uwe Marx
Abstract
Dynamic miniaturized human multi-micro-organ bioreactor systems are envisaged as a possible solution to the embarrassing gap in predictive substance testing prior to human exposure. A rational approach was applied to simulate and design dynamic long-term cultures of the smallest possible functional human organ units, human “micro-organoids”, on a chip shaped like a microscope slide. Each chip contains six identical dynamic micro-bioreactors, each with three different micro-organoid culture segments, a feed supply and waste reservoirs. A liver, a brain cortex and a bone marrow micro-organoid segment were designed into each bioreactor. This design was translated into a multi-layer chip prototype and a routine manufacturing procedure was established. The first series of microscopable, chemically resistant and sterilizable chip prototypes was tested for matrix compatibility and primary cell culture suitability. Sterility and long-term human cell survival were demonstrated. Optimizing the design approach and prototyping tools reduced a single design-and-prototyping cycle to only three months, allowing fast adjustment or redesign of inaccurate architectures. The chip platform is thus ready to be evaluated for the establishment and maintenance of human liver, brain cortex and bone marrow micro-organoids in a systemic microenvironment.
Article Outline
1. Introduction
2. Materials and methods
2.1. Oxygen supply
2.2. Design of the human liver culture segment
2.3. Design of the bone marrow culture segment
2.4. Design of the brain cortex culture segment
2.5. Process design
2.6. Prototype manufacturing
2.7. Non-invasive in process controls
2.8. Cell culture proof for prototypes
3. Results
4. Discussion
References
240 |
Toxicology in Vitro, In Press, Corrected Proof, Available online 15 December 2010
K. Schroeder, K.D. Bremm, N. Alépée, J.G.M. Bessems, B. Blaauboer, S.N. Boehn, C. Burek, S. Coecke, L. Gombau, N.J. Hewitt, J. Heylings, J. Huwyler, M. Jaeger, M. Jagelavicius, N. Jarrett, H. Ketelslegers, I. Kocina, J. Koester, J. Kreysa, R. Note, et al.
Abstract
There are now numerous in vitro and in silico ADME alternatives to in vivo assays, but how do different industries incorporate them into their decision tree approaches for risk assessment, bearing in mind that the chemicals tested are intended for widely varying purposes? The extent of the use of animal tests is mainly driven by regulations or by the lack of a suitable in vitro model. Therefore, what considerations are needed for alternative models, and how can they be improved so that they can be used as part of the risk assessment process? To address these issues, the European Partnership for Alternative Approaches to Animal Testing (EPAA) working group on prioritisation, promotion and implementation of 3Rs research held a workshop in November 2008 in Duesseldorf, Germany. Participants represented different industry sectors, including pharmaceuticals, cosmetics, and industrial and agro-chemicals. This report describes the outcome of the discussions and the recommendations (a) to reduce the number of animals used for determining the ADME properties of chemicals and (b) for considerations and actions regarding in vitro and in silico assays. These included: standardisation and promotion of in vitro assays so that they may become accepted by regulators; increased availability of industry in vivo kinetic data for a central database to increase the power of in silico predictions; expansion of the applicability domains of in vitro and in silico tools (which are not necessarily more applicable, or even exclusive, to one particular sector); and continued collaboration between regulators, academia and industry. A recommended immediate course of action was to establish an expert panel of users, developers and regulators to define the testing scope of models for different chemical classes. All participants agreed that improvement and harmonisation of alternative approaches are needed for all sectors and that this will be achieved most effectively by stakeholders from different sectors sharing data.
Article Outline
1. Introduction
2. Regulatory framework and views from regulators
2.1. Pharmaceuticals
2.2. Chemicals
2.3. Pesticides
2.4. Cosmetics
3. Use of in vivo and in vitro assays for risk assessment by different industry sectors
4. Information gained from in vitro assays
4.1. Absorption/exposure
4.2. Metabolic pathway identification and clearance
4.3. Selection of most relevant species
4.4. Drug–drug interactions in pharmaceutical safety evaluation
4.5. Mechanistic understanding
5. Recommendations from the workshop for in vitro assays
5.1. Apply considerations for in vitro assays (all sectors)
5.2. Standardize and promote in vitro assays (all sectors)
5.3. Avoidance of false positives (all sectors)
5.4. Mechanism-based toxicity assays should be further developed and validated (all sectors)
5.5. Actions related to in vitro assays (all sectors)
6. Information gained from in silico models
7. Recommendations from the workshop for in silico assays
7.1. More extensive use of QSAR by all industry sectors
7.2. Further development of PBBK models (all sectors)
8. Recommendations by the workshop for refining and reducing in vivo assays
8.1. Microdosing (pharmaceutical sector)
8.2. Incorporation of additional endpoints (chemical sector)
8.3. Issues in information sharing (all sectors)
9. Proposed actions from the ADME workshop to develop further alternative models and the 3R concept
9.1. Immediate actions
9.2. Additional actions
10. Conclusions
Conflict of interest
Acknowledgements
References
241 |
Mathematical and Computer Modelling, Volume 52, Issues 7-8, October 2010, Pages 1169-1176
F. Sánchez Lasheras, J.A. Vilán Vilán, P.J. García Nieto, J.J. del Coz Díaz
Abstract
The hard chromium plating process aims at creating a coating of hard and wear-resistant chromium, with a thickness of a few micrometres, directly on the metal part without the insertion of copper or nickel layers. Chromium plating features high levels of hardness and resistance to wear, and it is due to these properties that it can be applied in a wide range of sectors. The corrosion resistance of a hard chromium plate depends on the thickness of its coating, its adherence and its micro-fissures; this micro-fissured structure is what provides the optimal hardness of the layers. Hard chromium plating is one of the most effective ways of protecting the base material against a hostile environment or of improving its surface properties. However, electroplaters face many problems and undesirable results with chromium-plated materials; common problems in the electroplating industry include matt deposition, milky white chromium deposition, rough or sandy chromium deposition, and insufficient thickness and hardness. This article presents an artificial neural network (ANN) model to predict the thickness of the layer in a hard chromium plating process. The optimization of the ANN was performed by means of design of experiments (DOE) theory. In the present work the purpose of using DOE is twofold: to define the optimal experiments that maximize model accuracy, and to minimize the number of necessary experiments (ANN models trained and validated).
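For readers who want to see the shape of such a model, the sketch below fits a small multilayer perceptron to synthetic plating data. It is not the authors' network or data: the input variables (current density, bath temperature, plating time), the synthetic thickness relation and the network size are all assumptions, and scikit-learn's MLPRegressor stands in for the paper's ANN.

```python
# Minimal sketch, not the authors' model or data: a small multilayer perceptron
# predicting coating thickness from hypothetical plating parameters on
# synthetic data, with MLPRegressor standing in for the paper's ANN.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Columns: current density [A/dm^2], bath temperature [degC], plating time [min]
X = rng.uniform([10.0, 45.0, 30.0], [60.0, 60.0, 180.0], size=(200, 3))
thickness = 0.004 * X[:, 0] * X[:, 2] + 0.1 * (X[:, 1] - 50.0) + rng.normal(0, 1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, thickness, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)
print("R^2 on held-out runs:", ann.score(X_te, y_te))
```

In the paper, which experiments to run (and hence which training/validation points are available) is chosen via DOE rather than by random sampling as done here.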
Article Outline
1. Introduction
2. The aim of the present work
3. Mathematical model
3.1. Multilayer perceptron
3.2. Numerical simulation based on design of experiments (DOE)
4. Experimental design
5. Results
6. Discussion and conclusions
Acknowledgements
References
242 |
Technovation, Volume 30, Issues 11-12, November-December 2010, Pages 627-634
Gordon Müller-Seitz, Guido Reger
Abstract
At present, several initiatives have emerged that claim to be innovative while acting according to the mechanisms of open source software (OSS), a field frequently deemed to be a role model for open innovation. Against this background, this study focuses on a case study of the development of an automobile. Based on a commons-based peer production-informed perspective, we show that this project displays a variety of characteristics that are usually associated with OSS projects. In particular, parallels can be drawn between the intrinsic and extrinsic motivations, the ability to ‘broadcast’ ideas due to the virtual nature of the tasks, and the self-selection of tasks due to their modular nature. The drawing of such parallels, however, must be done cautiously because diverse factors, such as opportunity costs, regulations, and feasibility studies, limit the applicability of OSS principles to this non-software related network of dispersed voluntary contributors within a commons-based peer production framework. Herein, we attempt to clarify how OSS projects can and cannot work as role models for open innovation in the automotive as well as other product-oriented industries.
Article Outline
1. Introduction
2. A survey of studies on open source software
2.1. Basic features of OSS projects
2.2. Motivational stimuli
3. Commons-based peer production as a conceptual frame of reference
4. Research setting
5. Method
6. Discussion of the findings in the light of previous OSS research and the CBPP framework
6.1. Resemblances between the OSS realm and the OScar project
6.2. Limitations regarding the applicability of OSS mechanisms to other arenas
7. Implications for practice and theory
8. Concluding remarks
References
243 |
Agricultural Systems, Volume 103, Issue 7, September 2010, Pages 444-452
Kursat Demiryurek
Abstract
This research presents an analysis of the agricultural information systems and communication networks of organic and conventional hazelnut producers in the Samsun province of Turkey. Structured interviews were used to collect data from randomly selected conventional producers and from all 39 organic hazelnut producers living in the study area. The information systems of organic and conventional producers were found to differ: organic producers benefited from more information sources than conventional producers. In addition, the contract farming approach to organic agriculture had initially isolated organic producers from conventional producers. Furthermore, dissatisfaction with the organic marketing company and its organic production project resulted in further separation among organic producers and led some of them to establish their own union. A lack of access to information and support from the organic project-related sources, professional institutions and mass media sources was evident. This resulted in the development of social sources for exchanging information among the producers within their villages. However, this information is mainly based on traditional practices rather than scientific applications. Thus, more functional cooperation and professional communication between personal and institutional information sources are needed to enhance the diffusion of information and technology among farmers.
Article Outline
1. Introduction
1.1. Organic agriculture and hazelnut production in Turkey
1.2. Literature review
2. Methods
2.1. Study area
2.2. Interviews and sampling
2.3. Total Information Score
2.4. Statistical methods
3. Results
3.1. Socio-economic characteristics
3.2. Information systems
3.3. Communication networks
4. Discussion
5. Conclusion
Acknowledgements
References
244 |
Annals of Tourism Research, Volume 37, Issue 4, October 2010, Pages 1141-1163
David Matarrita-Cascante
245 |
International Journal of Information Management, Volume 31, Issue 2, April 2011, Pages 121-127
Tércia Zavaglia Torres, Ivo Pierozzi Jr., Nadir Rodrigues Pereira, Alexandre de Castro
Abstract
The great contemporary organizational challenge for enterprises is to create a conceptual and methodological framework that allows knowledge to be managed by means of networks designed for social interaction. This statement is based on the premise that the competitive drive and sustainable success of a company depend on the introduction of new forms of production and innovative processes, which can only be ensured through integrated approaches to knowledge management and the incorporation of information technologies (IT). This is a reality that has already been accepted by the Brazilian Agricultural Research Corporation (Embrapa, its acronym in Portuguese), a Brazilian research, development, and innovation (RD&I) institution supporting the agricultural sector. For some years now, Embrapa has been incorporating what it has learned about knowledge management into its strategic planning process. In this paper, we present a new approach to managing knowledge and information, and we analyze the need for research institutions to administer the knowledge they produce through an RD&I management model based on multi- and inter-disciplinary teams and multi-institutional research networks.
Article Outline
1. Introduction
2. Management system research at Embrapa
2.1. Embrapa Management System (SEG)
3. Knowledge management and communication: interfaces and dialogs
3.1. Business challenges and communication
3.2. INTAGRO: a project for interactive governance
4. Conclusion
References
Vitae
246 |
Expert Systems with Applications, In Press, Corrected Proof, Available online 28 January 2011
Esmaeil Hadavandi, Hassan Shavandi, Arash Ghanbari
Abstract
Success in forecasting and analyzing sales for given goods or services can mean the difference between profit and loss for an accounting period and, ultimately, the success or failure of the business itself. Reliable sales prediction is therefore a very important task. This article presents a novel sales forecasting approach that integrates genetic fuzzy systems (GFS) and data clustering to construct a sales forecasting expert system. First, all data records are grouped into k clusters using the K-means algorithm. Each cluster is then fed into an independent GFS model capable of rule-base extraction and database tuning. To evaluate the resulting K-means genetic fuzzy system (KGFS), we apply it to a printed circuit board (PCB) sales forecasting problem that has been used as a case study in several previous works. We compare the performance of the extracted expert system with previous sales forecasting methods using the mean absolute percentage error (MAPE) and root mean square error (RMSE). Experimental results show that the proposed approach outperforms the previous approaches.
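The cluster-then-model structure described above can be sketched briefly. In the sketch below, K-means partitions synthetic records and a separate local model is fitted per cluster; a plain linear regressor is deliberately substituted for the paper's genetic fuzzy system, and the data, cluster count and features are invented, so only the overall workflow and the MAPE/RMSE scoring mirror the abstract.

```python
# Sketch of the cluster-then-model workflow only. A linear regressor stands in
# for the genetic fuzzy system; data and cluster count are purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                 # e.g. lagged sales / demand indicators
y = np.where(X[:, 0] > 0, 50 + 3 * X[:, 1], 20 - 2 * X[:, 2]) + rng.normal(0, 1, 300)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(3)}

# Route each record to its cluster's model, then score with MAPE and RMSE.
pred = np.array([models[c].predict(x.reshape(1, -1))[0]
                 for x, c in zip(X, km.predict(X))])
mape = np.mean(np.abs((y - pred) / y)) * 100
rmse = np.sqrt(np.mean((y - pred) ** 2))
print(f"MAPE {mape:.2f}%  RMSE {rmse:.2f}")
```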
Article Outline
1. Introduction and literature review
1.1. PCB sales forecasting
2. K-means genetic fuzzy system
2.1. K-means clustering analysis
2.2. Developing a genetic fuzzy system
2.2.1. Genetic derivation of the rule base for FRBS
2.2.2. Genetic tuning of data base
3. Experimental results
3.1. Constructing KGFS sales forecasting ES
3.2. Comparisons of KGFS model with other previous models
4. Conclusion
References
► We present a novel sales forecasting approach that integrates genetic fuzzy systems (GFS) and data clustering. ► We use the K-means algorithm to construct the K-means genetic fuzzy system (KGFS). ► We evaluate the performance of the developed expert system on a printed circuit board (PCB) sales forecasting problem. ► We compare the performance of the presented KGFS against previous sales forecasting methods and report the results.
247 |
Technovation, Volume 30, Issue 3, March 2010, Pages 181-194
S.X. Zeng, X.M. Xie, C.M. Tam
Abstract
The complexity of innovation processes has led to tremendous growth in the use of external networks by small- and medium-sized enterprises (SMEs). Based on a survey of 137 Chinese manufacturing SMEs, this paper empirically explores the relationships between different cooperation networks and the innovation performance of SMEs using structural equation modeling (SEM). The study finds significant positive relationships between inter-firm cooperation, cooperation with intermediary institutions, cooperation with research organizations and the innovation performance of SMEs, of which inter-firm cooperation has the most significant positive impact. Surprisingly, the results reveal that linkage and cooperation with government agencies do not demonstrate any significant impact on the innovation performance of SMEs. In addition, the findings confirm that vertical and horizontal cooperation with customers, suppliers and other firms plays a more distinct role in the innovation process of SMEs than horizontal cooperation with research institutions, universities or colleges, and government agencies.
Article Outline
1. Introduction
2. Literature review
3. Hypothesis development
3.1. Government agencies
3.2. Inter-firm cooperation
3.3. Intermediary institutions
3.4. Research organizations
4. Research methodology
4.1. Background
4.2. Definition of SMEs
4.3. Survey design and data source
4.4. Measurement
4.5. Methodology
4.6. The sample
5. Results and analysis
5.1. Preliminary analysis
5.2. Measurement model
5.3. Causal model
6. Conclusions
Acknowledgements
Appendix
References
248 |
Planetary and Space Science, Volume 59, Issue 4, March 2011, Pages 343-354
Ryuhei Yamada, Raphaël F. Garcia, Philippe Lognonné, Mathieu Le Feuvre, Marie Calvet, Jeannine Gagnepain-Beyneix
Abstract
Fundamental scientific questions concerning the internal structure and dynamics of the Moon, and their implications for the Earth–Moon system, are driving the deployment of a new broadband seismological network on the surface of the Moon. Information about lunar seismicity and seismic subsurface models from the Apollo missions is used as a priori information in this study to optimise the geometry of future lunar seismic networks so as to best resolve the seismic interior structure of the Moon. Deep moonquake events and simulated meteoroid impacts are the assumed seismic sources, and synthetic P and S wave arrivals computed in a radial seismic model of the Moon are the assumed seismic data. Linearised estimates of the resolution and covariance of radial seismic velocity perturbations can be computed for a particular seismic network geometry. The non-linear inverse problem relating the seismic station positions to these linearised estimates of covariance and resolution is formulated and solved with the Neighbourhood Algorithm. This optimisation study favours near side seismic station positions at southern latitudes in order to constrain the deep mantle structure from deep moonquake data at large epicentral distances. The addition of a far side station halves the error bars on the seismic velocity model. Monitoring lunar impact flashes from the Earth improves the radial seismic model at the top of the mantle by adding many more meteoroid impact data at short epicentral distances, owing to the high accuracy of the space/time location of these seismic sources. Such meteoroid impact detections may be necessary to investigate the 3D structure of the lunar crust.
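One common form of such linearised covariance/resolution estimates, used to score a candidate network geometry, can be written compactly; whether it matches the authors' exact formulation is not stated in the abstract. In the sketch below, the sensitivity matrix G, the picking error and the prior velocity uncertainty are invented placeholders, whereas in the study the sensitivities would come from tracing P and S arrivals through the a priori lunar model for a given station layout.

```python
# Illustrative sketch of a linearised least-squares resolution/covariance
# estimate for one network geometry. G is a stand-in sensitivity matrix
# (travel-time partial derivatives w.r.t. layer velocities); real values
# would come from ray tracing through the a priori lunar model.
import numpy as np

rng = np.random.default_rng(3)
n_data, n_layers = 40, 6                  # P/S arrivals x events, velocity layers
G = rng.normal(size=(n_data, n_layers))   # hypothetical sensitivity kernel
Cd_inv = np.eye(n_data) / 0.5**2          # data errors: 0.5 s picking uncertainty
Cm_inv = np.eye(n_layers) / 0.2**2        # prior: 0.2 km/s velocity uncertainty

# Posterior covariance and resolution matrices of the linearised problem.
Cm_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
R = np.eye(n_layers) - Cm_post @ Cm_inv

print("posterior std per layer:", np.sqrt(np.diag(Cm_post)))
print("resolution diagonal:   ", np.diag(R))
```

Scalar summaries of Cm_post and R (for example their diagonals) could then serve as the cost function that the Neighbourhood Algorithm minimises over station positions.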
Article Outline
1. Introduction
2. A priori information
2.1. Lunar seismic model
2.2. Lunar seismicity
2.2.1. Deep moonquakes
2.2.2. Meteor impacts
2.3. Constraints on the network geometry
3. Inverse problem
3.1. Direct problem formulation
3.2. Cost function
3.3. Optimisation method
4. Results
4.1. Three stations on the near side
4.2. Adding a far side seismic station
4.3. Improvement due to lunar impact monitoring
5. Discussion
5.1. Recommendations for future lunar seismic networks
5.2. Limitations
6. Conclusion
Acknowledgements
References
► Optimisation of the seismic network geometry improves the seismic model. ► Optimum seismic station positions are favoured in the south/west of the near side. ► Adding a far side seismic station halves the uncertainties of the model. ► Locating lunar impact flashes from the Earth greatly improves the seismic model. ► The crust and upper mantle are properly determined by using impact flashes.
249 |
Fluid Phase Equilibria, In Press, Corrected Proof, Available online 21 September 2010
Carlos-Axel Díaz-Tovar, Rafiqul Gani, Bent Sarup
Abstract
In this work, some of the property-related issues in the lipid processing technology employed in edible oil and biodiesel production are highlighted. These include the identification of the most representative chemical species (acylglycerides, free fatty acids, tocopherols, sterols, carotenes, and fatty acid methyl esters); their representation and classification in terms of molecular structures; the collection of available experimental data on their pure-component physical properties; and the adoption of appropriate property-process models for the design and analysis of production processes through computer-aided tools such as process simulation. Whenever experimental data were not available, property models based on the group-contribution (GC) approach were employed to fill the gaps in the database. This included pure-component single-value properties (for example, the normal melting point temperature or the critical pressure) as well as temperature-dependent properties (for example, vapor pressure, liquid density, and liquid viscosity). Whenever sufficient data were not available, the PC-SAFT EoS was used to generate pseudo-experimental data for the temperature-dependent properties for regression of the GC-based model parameters. The completed database and property models have been employed in a process simulation to analyze the design issues of typical edible oil processes.
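The group-contribution idea itself is simple to illustrate: a property is estimated as a constant plus a sum of contributions from the functional groups present in the molecule. The toy sketch below uses made-up group values and an unspecified target property; it is not the Marrero–Gani model or its parameters, only the additive structure that GC methods share.

```python
# Toy group-contribution (GC) sketch: property ~ constant + sum(n_i * c_i).
# Group values and the universal constant are hypothetical, for illustration only.
contributions = {"CH3": 1.45, "CH2": 0.90, "COOH": 5.10}   # invented values

def gc_estimate(group_counts, universal_constant=10.0):
    """Return a pure-component property as constant + sum of group contributions."""
    return universal_constant + sum(n * contributions[g]
                                    for g, n in group_counts.items())

# e.g. a straight-chain fatty acid fragment: CH3-(CH2)14-COOH (palmitic acid)
print(gc_estimate({"CH3": 1, "CH2": 14, "COOH": 1}))
```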
Article Outline
1. Introduction
2. Lipid processing technology: the edible oil industry
3. Lipid-database and property prediction
3.1. Database – collection of experimental data
3.2. Property models
3.2.1. Pure component single value property model
3.2.1.1. Model performance – Marrero and Gani model
3.2.2. Temperature dependent properties (vapor pressure, liquid heat capacity, liquid viscosity, and surface tension)
3.2.3. Temperature dependent pure and mixture property – density
3.2.4. Use of the GC-based PC-SAFT EoS
3.3. Database for external software
4. Case study
4.1. Process simulation
4.2. Model-based simulation for process validation
5. Conclusions and future perspectives
Acknowledgements
Appendix A. Calculation of vapor pressure of Tripalmitin at 512.15 K through the GTD-model
Appendix B. Calculation of liquid heat capacity of stearic acid at 350 K through the GTD-model
Appendix C. Calculation of liquid viscosity of methyl myristate at 333.15 K through the GTD-model
Appendix D. Calculation of surface tension of monopalmitin at 413.15 K through the GTD-model
Appendix E. Calculation of liquid density of Brazil nut at 413.15 K through the modified Rackett equation
References
250 |
Computers & Industrial Engineering, Volume 59, Issue 4, November 2010, Pages 585-594
Xiuli Geng, Xuening Chu, Deyi Xue, Zaifang Zhang
Abstract
The product-service system (PSS) approach has emerged as a competitive strategy that impels manufacturers to offer a set of products and services as a whole. A three-domain PSS conceptual design framework based on quality function deployment (QFD) is proposed in this research. QFD is a widely used design tool that considers customer requirements (CRs). Since both the product and its services influence customer satisfaction, they should be designed simultaneously, and identification of the critical parameters in these domains plays an important role. Engineering characteristics (ECs) in the functional domain include product-related ECs (P-ECs) and service-related ECs (S-ECs), and are identified by translating the CRs in the customer domain. Rating the importance of ECs has a great impact on achieving an optimal PSS plan, and the rating problem should consider the requirements of both the customer and the manufacturer. From the customer side, the analytic network process (ANP) is integrated into QFD to determine the initial importance weights of ECs, considering the complex dependency relationships between and within CRs, P-ECs and S-ECs. To deal with the vagueness, uncertainty and diversity in decision-making, fuzzy set theory and a group decision-making technique are used in the super-matrix approach of ANP. From the manufacturer side, the data envelopment analysis (DEA) approach is applied to adjust the initial weights of ECs, taking into account business competition and implementation difficulty. A case study is carried out to demonstrate the effectiveness of the developed integrated approach for prioritizing ECs in PSS conceptual design.
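The super-matrix step of ANP can be illustrated in a few lines: dependencies between CRs and ECs are arranged in a column-stochastic weighted super-matrix, which is raised to a high power until its columns converge to the limiting priorities. In the sketch below the element labels and matrix entries are invented, and the fuzzy group judgements and DEA adjustment described in the paper are not reproduced; only the limit-matrix computation is shown.

```python
# Minimal ANP super-matrix sketch: raise a column-stochastic weighted
# super-matrix to a high power; the limiting columns give relative priorities.
# Labels and entries are invented for illustration.
import numpy as np

labels = ["CR1", "CR2", "P-EC1", "P-EC2", "S-EC1"]
W = np.array([                      # column j = influence of element j on the rows
    [0.0, 0.2, 0.3, 0.2, 0.1],
    [0.2, 0.0, 0.2, 0.3, 0.3],
    [0.3, 0.3, 0.0, 0.3, 0.3],
    [0.3, 0.2, 0.3, 0.0, 0.3],
    [0.2, 0.3, 0.2, 0.2, 0.0],
])
W = W / W.sum(axis=0)               # normalise columns (weighted super-matrix)

limit = np.linalg.matrix_power(W, 200)   # limit super-matrix
priorities = limit[:, 0]                 # any column; they coincide at the limit
print(dict(zip(labels, np.round(priorities, 3))))
```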
Article Outline
1. Introduction
2. Literature review
2.1. The applications of ANP and DEA in prioritizing ECs
2.2. The applications of fuzzy theory and group decision-making in the ANP approach
3. The proposed approach for rating ECs’ final importance
3.1. Construction of the ANP network model of HOQ
3.2. Calculation of the initial importance weights of ECs
3.3. Determination of the final importance weights of ECs using DEA
4. A case study
5. Conclusions
Acknowledgements
References