
InnovA: A Cognitive Architecture for Computational Innovation through Robust Divergence and Its Application for Analog Circuit Design

Hao Li, Student Member, IEEE, Xiaowei Liu, Fanshu Jiao, Member, IEEE, Alex Doboli, Senior Member, IEEE, Simona Doboli, Member, IEEE

Abstract—This paper presents InnovA, a cognitive architecture for creative problem solving in analog circuit design, e.g., topology creation (synthesis), incremental topology modification, and design knowledge identification and reuse. The architectural modules attempt to replicate cognitive human activities, like concept formation, comparison, and concept combination. The architecture uses multiple knowledge representations organized using topological similarity and causality information. Solutions are clustered, so that each cluster represents a specific set of performance tradeoffs, thus a fragment of the Pareto front in the solution space. New structural features are created through variation of existing features. New solutions are created by combining features from the same cluster, features from distinct clusters, and features that originate a new cluster. The paper also discusses the related algorithms. The architecture additionally incorporates modules mimicking the use of human emotions in memory formation and decision making, but these modules are still under development and are a main direction for our future work. The paper presents a number of examples to illustrate its use in various analog circuit design activities that are hard to realize with traditional computational methods.

Keywords: cognitive architecture, robust divergence, constrained model, analog circuit design

I. INTRODUCTION

Creative problem solving is critical in engineering innovation, in particular in circuit design innovation. Engineering innovation tackles open-ended as well as ill-defined design problems [1]–[3]. Open-ended problems require creating solutions with characteristics beyond the current domain knowledge, e.g., new building blocks, topologies (structures), constraints, and operation principles. Ill-defined problems introduce new requirements to an already existing problem, such that the new requirements cannot be tackled by simply exploring the tradeoffs of the existing solutions. Creative problem solving not only produces solutions to novel problems, but also generates new domain knowledge that is coherent with existing information and can be reused for future problems.

This material is based upon work supported by the National Science Foundation under Grant BCS 1247971. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. H. Li, X. Liu, F. Jiao, and A. Doboli are with the Department of Electrical and Computer Engineering, State University of New York at Stony Brook, Stony Brook, NY 11794-2350. Email: [email protected]. S. Doboli is with the Department of Computer Science, Hofstra University, Hempstead, NY. Email: [email protected].

Creative problem solving is difficult to formalize as an algorithmic procedure. The first efforts were centered around the General Problem Solver (GPS) algorithm, which performs a set of computational steps to reduce the gap between problem requirements and solution characteristics [4]. While this approach mimics to some degree the cognitive, goal-directed reasoning process [5], [6], devising computing systems that can address complex, real-world problems remains challenging. The second approach includes expert systems. They use a database of if-then rules to indicate the solving steps that are applied under specific conditions. Expert systems have been used to successfully solve certain circuit design problems [7]–[9]; however, building effective rule databases and rule selection (resolution) strategies remains difficult. Traditionally, static databases and selection rules have been used, but their nature does not accommodate well the adaptive nature of creative problem solving. The third approach utilizes evolutionary methods, like genetic algorithms (GAs) and memetic algorithms [10]–[12]. The expectation is that design innovation emerges by applying genetic operators, like mutation and combination. However, studies show that GA-based CAD methods create circuit design solutions of significantly lower quality and reusability than human designers [13]. We believe that these differences are hard to tackle with traditional optimization-based synthesis.

A. Overview of Cognitive Architectures

Alternatively, cognitive architectures have been a promising approach for performing tasks specific to human cognition that are difficult to handle through traditional, procedural methods. Cognitive architectures include various kinds of knowledge representations, and memory organization and retrieval mechanisms. The related methods perform knowledge classification, summarization, and comparison, as well as techniques for decision making, prediction, learning, and goal setting. Examples of cognitive architectures are discussed in [14]–[18]. The SOAR architecture models cognition-based problem solving [17], e.g., robot navigation. The ACT-R architecture proposes a number of important innovations: multiple ways of symbolic knowledge representation, learning of declarative and procedural information, and utility-based decision making [15]. The EPIC architecture models the peripheral cognition of perceptual biological systems [19]. The Sigma cognitive architecture utilizes mixed representation models (symbolic - probabilistic, discrete - continuous), knowledge summarization and integration, and inference-based reasoning [20].


Fig. 1. Main elements of the model supporting the cognitive architecture (figure). The figure shows solution clusters and their kernels obtained by clustering based on similarity, individual solutions, Pareto front fragments, new and excluded niches, and the five evolution strategies: evolution using own knowledge (strategy 1), evolution using migrated features (strategy 2), evolution using combined features (strategy 3), evolution using alternative features (strategy 4), and evolution through kernel aggregation (strategy 5), spanning convergent to divergent evolution.

The Clarion architecture distinguishes between explicit and implicit cognition, each using specific representations and processing methods [21]. Other cognitive architectures are discussed in [22]–[25]. It is hard to estimate the suitability of existing cognitive architectures for creative problem solving in circuit design, as they do not refer to analog circuit design knowledge and do not emphasize robust divergence beyond the limits of existing knowledge. However, robust divergence is critical in tackling new goals and tradeoffs by inventing new building blocks and structures for the solutions. The cognitive architecture proposed in this paper identifies invariant ideas (patterns), structured idea sets, and idea structuring methods specific to the solution evolving processes characteristic of open-ended or ill-defined problem solving activities. Creative activities are defined as continuous processes of adapting existing knowledge to new problem requirements. The cognitive architecture learns reliable approximations and heuristics that can correctly tackle the complexity of the solution space, e.g., feature variations of the same knowledge concept and available ways of concept structuring to address certain problem requirements. The learned knowledge is encoded as reusable knowledge either explicitly, as new building blocks, causal information between parameters and outcomes, reasoning strategies, preferences, and beliefs, or implicitly, as the cognitive architecture parameters learned during operation.

B. The Proposed Cognitive Architecture

The proposed cognitive architecture (InnovA) addresses creative problem solving for analog circuit design, such as circuit topology creation, incremental topology modification, and design knowledge identification and reuse. We argue that these activities are difficult to address using current methods. The architecture computationally mimics the cognitive functions considered by work in psychology to be critical in creativity and innovation, like concept formation, comparison, and concept combination [26], [27]. It includes different knowledge representations (similarity-based associations, outcome-related associations, and causal justifications) organized as short-term, long-term, episodic, subjective, and context-dependent memories. Multiple knowledge representations are stored in the memories, organized as features and concepts (circuit building blocks (BBs)) at various levels of abstraction. New solutions are created by combining features and concepts using five generic strategies and by varying the structure of existing BBs. New BBs are candidates for future reuse, and are automatically recognized.

The paper summarizes the related algorithms and presents circuit design tasks performed using the methodology of the architecture. The architecture also includes modules related to emotion-based processing and to knowledge restructuring when the knowledge organization produces many inaccurate predictions, but the related algorithms are currently under development [28]. The presented cognitive architecture integrates and completes the individual methods and knowledge structures detailed in our previous publications [29]–[35]. Compared to our previous work, this paper indicates how the individual procedures and knowledge representations are utilized together in the cognitive architecture, as well as the main characteristics and constraints of the architecture. These aspects are discussed in Sections II.B-II.D and III.C. Sections V.A and V.B are also new; they present more traditional EDA applications of the architecture, e.g., using it for transistor sizing and building block identification. We argue that the proposed cognitive architecture represents a significant departure from the traditional, optimization- and/or exploration-driven design automation approaches for analog circuit design. We think that our work is conceptually more similar to the work by Lake, Salakhutdinov and Tenenbaum [36], in the sense that the method relies on the same computational pillars as theirs: "compositionality, causality, and learning to learn" [36]. However, that work tackles automated classification of visual objects, like handwriting, which is a very different domain from circuit design. While some of the discussed applications can be addressed with existing optimization algorithms too, we are not arguing that the cognitive architecture provides superior optimization methods, as optimization is not its main purpose. The main benefit of the proposed work is in presenting a novel approach that mainly focuses on knowledge-centered design and reuse (e.g., using building blocks, design features and patterns, etc.) for tasks like design comparison, causal reasoning for design understanding, incremental design refinement, design feature combination, and mimicking a certain design style. Other knowledge-centered activities are possible too. We think that many of the shown algorithms can be further improved to increase their design quality and effectiveness.

The paper has the following structure. Section II discusses the formal model supporting the proposed architecture. Section III describes its structure, and Section IV presents the related algorithms. Section V illustrates a set of design activities performed using the methodology of the cognitive architecture. Conclusions end the paper.

II. FORMAL MODEL

Fig. 1 summarizes the main elements of the model of the proposed cognitive architecture: (i) The architecture produces solutions through incremental evolution (transformations). (ii) Solutions are clustered in topologically-similar sets, each cluster having a unique kernel. All solutions in a cluster can be obtained from the kernel through incremental transformations. Hence, each cluster tackles an invariant set of tradeoffs (niche) and represents the Pareto front fragments implementing the tradeoffs. The model includes five evolution strategies: evolution (1) using knowledge only from its own cluster (strategy 1), (2) using knowledge originating in other clusters (strategy 2), (3) creating new clusters (niches) by combining features from different clusters (strategy 3), (4) using alternative features, hence excluding an existing niche (strategy 4), and (5) through kernel aggregation, in which a new kernel and cluster result from aggregating the features of individual solutions (strategy 5).


This section first summarizes the basic model and then presents the constrained model for robust divergence, the model architecture, and the characteristics of the constrained model evolution.

A. Summary of the Basic Model

A summary of the basic model supporting the cognitive architecture is presented next. The detailed description is offered in [29]. The model elements are as follows:

1) An attribute (feature) is the triplet (vars, rel, context), where vars is the set of variables involved in the attribute description, rel is the relation between the variables, and context is the set of conditions (constraints) under which the relation holds. Each variable v_i is defined over its domain Dom_{i,1} × Dom_{i,2} × ... × Dom_{i,k}.

2) A concept C is the triplet

C = ⟨Invariants, Uniqueness, Enabling⟩    (1)

or

C = ⟨I, U, E⟩    (2)

where each of the three elements is an attribute set. The elements are defined as follows:

I(C) = {A_i | A_i ∈ ∩_{∀D_i ∈ C} Attr(D_i)} ≠ ∅    (3)

where D_i are the instances of concept C and Attr(D_i) is the set of attributes of instance D_i. The set indicates the attributes common to all concept instances.

U(C) = {A_i | A_i ∈ I(C) ∧ A_i ∉ I(C_k), ∀C_k ≠ C} ≠ ∅    (4)

The set represents the attributes that are unique to concept C, hence distinguish it from the other concepts C_k in the knowledge representation.

E(C) = {e | ∃A_i ∈ Attr(C), s.t. e → context_{A_i}}    (5)

It is the set of conditions under which the attributes in sets I and U hold. In addition, an attribute of a concept cannot be independent from the other attributes:

∀A_i ∈ Attr(C), ∃A_j ∈ Attr(C) s.t. A_i ≠ A_j ∧ v_j ∈ vars_{A_i} ∩ vars_{A_j} ≠ ∅    (6)

Moreover, concept features can be partitioned into building blocks (BBs). BBs are non-overlapping sets of features, so that their variables pertain either to a single partition (internal variables) or to multiple partitions, so that the variables input to one partition are output by another partition (interfacing variables). The consequence of the partitioning requirement is modularity.

3) A solution is a concept C that meets the requirements of a problem, hence produces a certain reward at the expense of a certain cost.
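To make the basic model concrete, the following minimal Python sketch shows one possible encoding of attributes and of the concept sets I(C), U(C), and E(C) from equations (1)-(6). All names (Attribute, invariants, uniqueness, enabling) are illustrative, not from the paper's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attribute:
    """An attribute (feature): the triplet (vars, rel, context)."""
    vars: frozenset      # variables involved in the attribute description
    rel: str             # relation between the variables
    context: frozenset   # conditions under which the relation holds

def invariants(instances):
    """I(C): attributes common to all concept instances (eq. (3))."""
    attr_sets = [set(inst) for inst in instances]
    return set.intersection(*attr_sets) if attr_sets else set()

def uniqueness(concept_invariants, other_concepts_invariants):
    """U(C): invariant attributes of C shared by no other concept (eq. (4))."""
    unique = set(concept_invariants)
    for other in other_concepts_invariants:
        unique -= set(other)
    return unique

def enabling(attributes):
    """E(C): conditions under which the attributes in I and U hold (eq. (5))."""
    return set().union(*(a.context for a in attributes)) if attributes else set()
```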

Fig. 2. Constrained model of the cognitive architecture (figure). The figure connects the population of solutions, the incremental evolution operators, solution evaluations (embodiment), causality prediction, outcome prediction, the cost-reward model (cost model and reward model), methods to find similarity and differences, the domain knowledge structure, restructuring of the domain knowledge, comparison of solution performance with predictions, and emotions, together with the model parameters: critical size, critical diversity (amount of diversity), increment granularity, distance definition, acceptable similarity error, decoupling error, acceptable prediction error, and the amount of space that can be explored.

4) Each concept C representing a solution has a causal sequence indicating how its attributes serve in meeting the performance requirements of the problem or in providing the context conditions of other attributes.

A consequence of condition 2) is that concepts have a topology (structure) defined by the relations between attributes. Similarity defines matching relations between the variables of attributes [30], and is used to find the similarities and differences between building blocks and concepts [30]. The model is characterized by metrics (a) about the generated solutions and (b) about the evolution (transformation) process [13]. The first set includes performance bottlenecks, Constraining Factor (CF, the flexibility reduction due to a feature), Variable Domain Modification (VDM, the variable domain extension after concept combination), and Amount of Performance Improvement (API, the improvement in problem requirement matching due to a concept combination). The second set includes metrics like flexibility (the number of different features that can be combined with a concept), Expected Increase in Concept Structure (EICS, the expected new concepts that are produced by a structure), and Concept Complexity Index (CCI, a concept complexity measure). The remainder of the section presents the model extension for improving the robustness of solution evolution.

B. Constrained Model

The role of constraints in the formal model is to increase the likelihood of robust evolution of creative solutions to open-ended or ill-defined problems. Robustness is defined by the degree to which evolution converges towards effective solutions to such problems. Hence, robustness describes the capability of reaching efficient tradeoffs between the determinism and uncertainty of a problem. Tradeoffs are defined by the concept attributes of the solution. The likelihood of producing problem-satisfying solutions is described by the following equation:

Likelihood(success) ≈ max (∃) Flexibility(Knowledge), s.t.
  |Requirements(Problem) − Performance(Solution)| < ε ∧
  Solution = Sequence(Knowledge, Operator_set) ∧
  min Complexity(Solution)    (7)

hence, the existing knowledge has sufficient flexibility to support creation of solutions that meet the problem requirements and have minimal complexity. Solutions are created by


applying a sequence of operators to the existing knowledge. New knowledge and operator sequences might be created during the process. The equation states that there is sufficient knowledge and there are available procedures to create solutions of minimum (reasonable) complexity. Solutions are produced using four operators that create feasible solutions by adding or replacing building blocks to address the problem requirements, or by changing the structure of an existing building block through variation or analogy with existing building blocks. The five strategies in Fig. 1 are used. Operators are selected so that they improve performance (i.e., reduce the distance of the solutions to the requirements) and/or increase the variable domains of the solutions, hence reduce the constraining of the solutions, while the new solution remains within distance Th of the requirements:

Solution_new = Sequence(Solution, Operator_i), s.t.
  (|Requirements(Problem) − Performance(Solution_new)| <
   |Requirements(Problem) − Performance(Solution)|) ∨
  (VDM(Solution_new) > VDM(Solution) ∧
   |Requirements(Problem) − Performance(Solution_new)| < Th)    (8)
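The acceptance condition of equation (8) can be read as a simple predicate over candidate solutions. The sketch below assumes that distance and VDM evaluation functions are supplied by the surrounding framework; it illustrates the selection rule and is not the paper's code.

```python
def accept(solution, new_solution, requirements, distance, vdm, Th):
    """Eq. (8): keep a new solution if it gets closer to the requirements,
    or if it relaxes variable domains while staying within distance Th."""
    closer = distance(requirements, new_solution) < distance(requirements, solution)
    relaxed = (vdm(new_solution) > vdm(solution)
               and distance(requirements, new_solution) < Th)
    return closer or relaxed
```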

The constraints of the formal model help the implicit evaluation of the model metrics during solution evolution. They aid a more tractable problem-solving process. Using the metrics, equation (7) is restated as follows:

Likelihood(success) ≈ max ∪_i EICS(C_i), C_i ∈ Knowledge, s.t.
  |Requirements(Problem) − Performance(Solution)| < ε ∧
  Solution = Sequence(Knowledge, Operator_set) ∧
  (|Requirements(Problem) − Performance(Solution_new)| <
   |Requirements(Problem) − Performance(Solution)| ∨
   VDM(Solution_new) > VDM(Solution) ∧
   |Requirements(Problem) − Performance(Solution_new)| < Th) ∧
  min CCI(Solution)    (9)

Thus, the overall EICS value for concepts C_i in the knowledge structure (and the related Pareto front fragments) should be maximized through sequences of transformations involving the five cases in Fig. 1 and with minimum complexity of the solutions on the front fragments. In addition, there must be enough resources available to generate the solution, e.g., to support search (reasoning) until a solution is found. The model's deterministic elements include the building blocks (BBs) of the concepts in the knowledge structure, their incorporated attributes (features), the operators used to produce new solutions, and the conditions that select the candidate options for the evolution process. The unknowns include deciding the actual operators as well as the actual features used in creating new BBs, or the blocks that are added or replaced. The proposed constraints aim to increase the robustness of the evolution process, so that equations (7)-(9) are more likely to be met. Equation (9) is addressed through the incremental evolution process summarized in Fig. 1. The process expands current niches, including their Pareto front fragments, and creates new niches and kernels using the five evolution strategies in Fig. 1. Then, the maximization goal in equation (9) becomes:

max ∪_i EICS(C_i), C_i ∈ Knowledge ≈ max ∪_j EICS(kernel_j)    (10)

Hence, the EICS of the concepts in the domain knowledge representation is approximated by the EICS of the kernels of the solution clusters that were identified.

E[max ∪_j EICS(kernel_j)] ≈ max ∪_j E[∪_{k, C_m ∈ cluster_j} EICS(Seq_k(C_m))], s.t. Reward > Th_R    (11)

The expected maximum EICS of the kernels is the reunion of the Pareto front fragments of all clusters j (corresponding to kernel_j) created by applying sequences Seq_k of the five operators to concepts C_m in a cluster. Reward models the usefulness of continuing the incremental evolution of a cluster. It has three elements: (i) the distance between problem requirements and solution performance and the solution constraints relation in equation (9), (ii) the effort (resources) available to the cluster (niche) to further conduct incremental evolution, and (iii) the benefits produced by a niche for another niche (e.g., the created BBs are also used by another niche). Th_R is a lower bound. Equation (11) was recast as the following two equations:

max ∪_j E[∪_{k, C_m ∈ cluster_j} Reward(Seq_k(C_m))] ≈ max ∪_j E[∪_{k, C_m ∈ cluster_j} |Requirements(Problem) − Performance(Seq_k(C_m))| < ε], s.t. min ∪_{m,k} Effort(Seq_k(C_m))    (12)

The equation states that the expectation to meet the problem requirements should be achieved using minimum effort.

max ∪_j E[reusable_variety(cluster_j)]    (13)

The equation corresponds to the previous aspect (iii) and states that the reusable variety of all clusters should be maximized. The above equations state that incremental evolution originates dynamic conditions (e.g., equilibrium conditions) between top-down constraints introduced by kernels and bottom-up constraints defined by BB variations.

C. Architecture of the Constrained Model

Fig. 2 presents the structure of the constrained model of the cognitive architecture. Note that the model incorporates a set of parameters that are adjusted depending on the current activity: the critical number and critical diversity of the solution population, the increment granularity for incremental evolution, the acceptable similarity errors for the solution matching procedure, the acceptable decoupling error for outcome prediction, and the threshold of correct prediction. Model parameters are critical for the robust evolution of creative solutions. However, it is difficult to explicitly decide the parameter values that produce robust evolution, as well as the relations between parameters. The model constraints set the parameters through an implicit process. Moreover, many combinations of parameter values are either infeasible or equivalent with respect to the solutions they generate. Constraints and the cognitive architecture attempt to reduce such situations. The cognitive architecture in Section III includes a scheme for setting the values of these parameters. The model constraints are grounded in the following principle that supports more effective solution outcome prediction. The principle (called the orthogonal-quasilinear-saturation assumption) is grounded in observations in cognitive psychology [37].


It has three components:
• Orthogonality refers to the fact that solution characteristics (e.g., functionality and performance) can be related to concept attributes (features), e.g., building block parameters. Hence, the purpose of the building blocks can be decided, or their presence in a solution can be justified. Thus, the causality of building blocks is decidable, such as the causality between building block parameters and changes in functionality and performance.
• The quasilinear element refers to the approximately linear dependency between concept feature changes and performance variations under certain conditions.
• Saturation indicates the concept feature conditions beyond which the quasilinear assumption fails. Thus, it shows the ranges of the current solution front.
The other model constraints describe two aspects: solution clustering and incremental evolution of solutions. Constraints also incorporate elements to implicitly implement the orthogonal-quasilinear-saturation principle.

1. Similarity-based solution clustering. Niche emergence.

Clustering. Solutions are clustered based on topological (structural) similarity. Given a cluster C, there is a set of concepts D and a set of connections between the concepts in set D, so that any solution in cluster C can be obtained by transforming the connected concepts in D. The connected concepts are called the kernel of the cluster. Hence,

(∀) C_m ∈ cluster(kernel_j), (∃) Seq_k s.t. C_m = Seq_k(kernel_j)    (14)
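A minimal sketch of the clustering rule in equation (14), assuming a predicate derivable(kernel, solution) that tests whether some transformation sequence Seq_k derives the solution from the kernel; all names are hypothetical.

```python
def cluster_solutions(solutions, kernels, derivable):
    """Assign each solution to the first kernel from which it is derivable (eq. (14))."""
    clusters = {k: [] for k in kernels}
    unclustered = []
    for s in solutions:
        for k in kernels:
            if derivable(k, s):
                clusters[k].append(s)
                break
        else:
            unclustered.append(s)  # candidate seed for a new kernel (strategy 5)
    return clusters, unclustered
```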

Clusters sample the solution space, each cluster representing a certain set of performance tradeoffs. Tradeoffs create the solution niche occupied by the cluster. A cluster is dominated by another cluster if all its tradeoffs are worse than the tradeoffs of the other cluster. The Pareto front is formed by the set of clusters that are not dominated. Each kernel or concept is described by its causality sequence:

var_1 → SPerf_1 | context_{var_1}; var_2 → SPerf_2 | context_{var_2}; ...    (15)

where SPerf_i is the set of performances causally controlled by parameter var_i under the conditions defined by context_{var_i}. Variables earlier in the sequence are deemed to have a higher priority than the later parameters. Hence, a causal sequence defines an ordering in which the performance parameters of the tradeoff of a cluster should be tackled.

Comparison. Two solutions of the same cluster are compared with each other using the following procedure (differential analysis): (1) Find the sets of transformations Seq_1 and Seq_2 that generate each of the two solutions starting from the common kernel. (2) For each attribute of the two solutions, find the common sequence of transformations with the other solution. (3) Match the features of the two solutions, so that their common sequence of transformations is maximized (max Seq_c; Seq_c ⊂ Seq_1, Seq_2). (4) Compute the topological distance between pairs of matched attributes as the reunion of the disjoint transformation sequences of the attributes (Dist = (Seq_1 − Seq_c) ∪ (Seq_2 − Seq_c)); solution similarity is the common part. (5) Correlate the topological similarities and distances to the solutions' performance.
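The comparison procedure can be illustrated with a simplified sketch that models transformation sequences as lists of operator labels and approximates the common part Seq_c by the longest common prefix, which is only one of several possible matching choices.

```python
def differential_analysis(seq1, seq2):
    """Approximate Seq_c and Dist for two transformation sequences."""
    common = []
    for a, b in zip(seq1, seq2):
        if a != b:
            break
        common.append(a)                 # Seq_c ⊂ Seq_1, Seq_2
    # Dist = (Seq_1 − Seq_c) ∪ (Seq_2 − Seq_c), here as a concatenation
    dist = seq1[len(common):] + seq2[len(common):]
    return common, dist                  # similarity = common part, diversity = Dist
```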

Fig. 3. Concept combinations for incremental evolution (transformation) (figure): (a) combination within the same cluster, where matched and distinguishing features of Solution 1 and Solution 2 produce solutions through direct feature combination, BB variation, or BB analogy; (b) combination across two clusters, where matched solutions in Cluster 1 and Cluster 2 form the kernel for a new Cluster 3.

The procedure is based on concept unification, a well-known activity in cognitive psychology [38], [39]. The similarity of a cluster is the intersection of the similarities of all solution pairs of the cluster. The diversity of the cluster is the set of distances between all pairs. The values under which all conditions context_i are valid are part of the belief system and are found using the matching algorithms presented in [30]. A consequence is that if a solution is an instance of a kernel, then there is a matching between the parameters of the two sequences, so that the parameters have the same ordering in the sequence and realize the same kind of effect on the performance parameters.

2. Incremental evolution of solutions.

Solution combination. The attributes of two solutions are combined using the following rules. The discussion considers that concepts are described as data/signal processing flow graphs, as in [30]. Fig. 3 illustrates the following situations:
1) The solution attributes modified during combination are those that produce the most significant improvement of the required performance or relaxation of the variable domains. This is achieved by storing the causal relations var_i → Perf_j | context_{var_i} (equation (15)) of each attribute for the solutions in which it has been used before. The causal relations are part of the predictors used in estimating the expected solution performance.
2) If the two solutions pertain to the same cluster, then the following constraints must be met (Fig. 3(a)): (i) The solution includes the instantiation present in one of the solutions for the common part. (ii) The distinct attributes are present only if their enabling conditions are met by the rest of the attributes and they are justified by improving performance or flexibility, i.e., relaxing the requirements of the solution. The figure shows three different kinds of solution combinations (different colors indicate how BBs are combined in the results).


Fig. 4. Divergent evolution (figure): starting from a kernel, its performance tradeoffs, and the current solution front (explored solution space, Pareto front fragment), bottlenecks are identified and cues associated to the bottlenecks retrieve BB variety (e.g., filtering, amplification, and compensation BBs); the tradeoffs of interest are uncoupled, linear prediction for the uncoupled causal variables is based on previous solutions, and gradual disassembling of the solution with the least-impact variety drives divergent evolution, with improvement in rewards, towards a solution predicted to solve the problem.

3) If the two solutions are in different clusters, a new kernel is generated from the BBs of the solutions (Fig. 3(b)). The new kernel corresponds to a new cluster for a different niche. Colors show how BBs combine to form the new kernel and one of its related concepts (solutions). In a top-down flow, abstract features combine to first form the kernel, followed by instantiating it into solutions. In a bottom-up flow, physical features combine to produce the solution of a new cluster, followed by creating its kernel through abstraction. Constraints (2) and (3) help ensure that new solutions are feasible.

D. Characteristics of the Constrained Model Evolution

The following elements characterize the constrained model:
Multiple representations. Having multiple knowledge representations is a consequence of clustering using differential analysis and causality. Representations are at different levels of granularity depending on the kernel of the cluster, represent associative structures depending on topological similarity, or describe causal representations that give insight (justification) into how the connected building blocks of a solution meet the requirements of the problem. Hence, concepts and attributes are organized hierarchically, and implicitly introduce relationships like synonyms, homonyms, and antonyms. A discussion of the knowledge representation structuring for different open-ended and ill-defined problems is presented in [40]. Knowledge structuring is also discussed in [41], [42].
Robust divergent evolution. The second issue relates to achieving robust topological divergence, e.g., new solution topologies are created so that they pertain to a niche of the problem requirements and their new attributes (features) are reusable beyond the current niche. Incremental evolution is achieved through the five strategies in Fig. 1. Divergence is spurred by two situations: (i) the current solutions cannot meet the problem requirements, and (ii) there is a change of the problem requirements, i.e., a gradual change of the importance of individual performance attributes, or a merging of previously considered independent requirements. As a result, new cluster kernels are produced, hence new niches emerge. Fig. 4 depicts divergence during evolution. The evolution process creates alternatives for the building blocks with the least impact on the performance requirements of the current

niche (hence, the blocks of least causality), including new building blocks produced through variation or analogy of existing blocks. These situations are explained by the fact that the least causal blocks are the most flexible, given that the resulting solutions should be feasible. This process is similar to null spaces in regulatory biological structures [43], where redundant structures incorporate large topological variety. Robust divergence relates to creating enough building block variety to generate requirement-satisfying solutions. Divergence applies the existing BB variety to the first solution predicted to solve the problem, e.g., the least abstract solution without the tradeoffs defining the current niche (e.g., the bottlenecks of the solutions associated to the niche). This selection process mimics gradual disassembling of the solution, from the least to the more important blocks. The expected increase of the concept space (EICS) from the concept is likely to include a solution to the problem, if such solutions are reachable with the current knowledge and resources. The correctness of the EICS prediction is improved by uncoupling the causal parameters of the tradeoffs and by the linear dependency of the solution performance on the causal parameters. The evolution of a cluster stops in two conditions: applying the four operators does not further improve the solution performance, or the rewards associated to the problem are exhausted. In the first case, the bottlenecks embedded in the kernel cannot be further tackled to generate superior solutions. Rewards are reduced by the cost spent to devise new solutions; hence, each solution transformation incurs a certain cost. Every kernel and its related cluster form a niche of the evolutionary process through the solution space. The solutions obtained by transforming the kernel using the four operators are valid. Hence, each cluster is a collection (repository) of BBs available to tackle the requirements of a problem. Every niche carries a reward representing the resources available to conduct the evolution process. The emergence of new niches can occur through two approaches: a gradual change of the problem requirements until a new problem and niche emerge, i.e., in open-ended problems, and a merging of previously unrelated requirements, e.g., in ill-defined problems. Constraints increase the robustness of the evolution process as expressed by equations (7)-(8). The justification is as follows. (i) The four operators applied under constraints (2) and (3) help produce mainly feasible (working) solutions. (ii) Moreover, solution clustering for different performance niches helps maximize the diversity (variety) of the available building blocks, e.g., the likelihood of developing different structures and building blocks. (iii) The orthogonal-quasilinear-saturation principle aids in predicting the features that are likely to improve performance or relax the performance tradeoffs embedded in a solution. It is similar to a set of piecewise-linear performance approximations. It also supports a more correct identification of orthogonal causalities, i.e., solution features that need to be addressed during evolution. A concept's flexibility in generating new solutions (equations (7) and (9)) is approximated by concept typicality and versatility:

Flexibility(C) ≈ Typicality(C), Versatility(C)    (16)

Note that this equation is similar to Bayesian inference, considered to be a good model for human decision making [44].
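The two stopping conditions for cluster evolution described above can be summarized in the following schematic loop, assuming hooks apply_best_operator and performance and a fixed cost per transformation; this is an interpretation of the description, not the paper's implementation.

```python
def evolve_cluster(solution, reward, apply_best_operator, performance, cost_per_step):
    """Evolve until operators stop improving performance or rewards are exhausted."""
    while reward > 0:
        candidate = apply_best_operator(solution)
        if candidate is None or performance(candidate) <= performance(solution):
            break                    # kernel bottlenecks cannot be tackled further
        solution = candidate
        reward -= cost_per_step      # each transformation incurs a cost
    return solution, reward
```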

III. COGNITIVE ARCHITECTURE

Fig. 5 presents the cognitive architecture (called InnovA) based on the constrained model discussed in Section II. Knowledge is organized as three components in the semantic memory module: associative structure, connections to goals, and causal sequences (justifications). The associative structure clusters concepts based on the similarity of their features. Each concept is an abstraction of its refinements. Cost functions used in clustering analog circuit design knowledge were discussed in [30]. Multiple associative representations result depending on the similarity tightness (error) used in clustering. Connections to goals indicate associations between concepts and specific requirements, e.g., the concept likely having a main impact on requirements. Connections are important in introducing causal relations for specific solutions. Causal sequences justify the use of specific BBs and BB connections in creating a solution, i.e., the way in which BBs relate to each other and the solution tradeoffs that are tackled when solving a problem.

The architecture has two parts: the part for objective learning and reasoning (the modules shown with a continuous line in the figure) and the part for subjective learning and reasoning (the modules shown with a dashed line). The cognitive architecture instantiates the three modules of the semantic memory as part of three memory subsystems: the global memory system, the subjective memory, and the context-dependent memory. The global memory system includes the entire knowledge to which the cognitive architecture has been exposed, like the knowledge that was pre-programmed into the architecture or learned during operation. As in other cognitive architectures [16]–[18], the memory system is organized as long-term memory (all associative structures, all connections to goals, and all causal sequences), short-term memory (the knowledge accessed for the current task), and episodic memory (all solution outcomes for specific problems).

The subjective memory module includes the knowledge learned for effective problem solving, including the learned heuristics for a more efficient traversal of the solution space. The subjective memory has the following three parts:
(1) Beliefs are truth values likely to increase the effectiveness of reusing the available knowledge in solving a new problem. Beliefs are learned dynamically during operation. Belief formation represents an inference process that finds the conditions under which the interpretation of the causal sequences of solutions is correct [45]. Beliefs are ranked based on their strengths. The process represents unsupervised learning. The consistency of beliefs (e.g., whether beliefs contradict each other or not) is a measure of the correctness of prediction; hence, surpassing certain thresholds triggers the need for belief modification and knowledge restructuring.
(2) Preferences indicate the priority in selecting a feature or a concept, from a set of approximately similar alternatives, to be incorporated into a solution. Preferences define an ordering of alternatives. They result through unsupervised learning. Preferences also include social aspects, like the prestige of others, outcome importance to others, and preferences of others [37].

(3) The emotion module mimics the use of human emotions in decision making and problem solving [46], [47]. The module associates the degree of matching and the nature of mismatches between outcomes and problem requirements to a fixed set of bins (analogous to emotions). The module also learns social emotions, e.g., importance (usefulness, novelty) for others, population-level problem requirements, etc. Learning is similar to supervised classification, as the set of available emotions is fixed. The emotion module is correlated to the preference memory.

The context-dependent memory stores instances of the subjective memory for a given problem. Certain beliefs, priorities, and emotions are selected depending on the specific context. Selection uses the previous contexts similar to the current situation.

The objective learning and reasoning part has the following modules. The population of solutions module contains the solutions that are under development for the current problem, including their BBs, connections to the problem requirements, and causal sequences. The population contains the current Pareto front for the problem as well as the solutions, features, and steps utilized to obtain the front. A population subset controlled by the attention window module is stored in the short-term memory. The population of solutions does not have a cognitive equivalent.

The understanding needs module utilizes causality information and unmatched requirements to decide which features and BBs need to be tackled next by the problem-solving process. The process generates a cue that is used together with the context-dependent memory to access the long-term memory to retrieve useful features and BBs. Possible cues pertain to a cue hierarchy: (i) constraints on the features (parameters) of the BBs, (ii) orderings among constraints (i.e., preferences, priorities), (iii) causality aspects, i.e., how features relate to requirements, and (iv) constraints on the priority of the requirements that must be tackled. More concrete cues are more specific to the problem; more abstract cues are more suitable for reuse. The concept of pattern search discussed in [48] can be interpreted as an example of constraints on parameter values and of ordering among parameter constraints.

The module to produce alternatives using incremental transformations through operators utilizes the current population of solutions and the four operators to create new solutions. As explained in Section II, the new solutions are more likely to be feasible because of the constraints of the underlying model. The select alternatives module picks from the set of possible alternatives a solution to be further developed by the process. Selection uses the two prediction modules: the predictions about causality module, to understand how the new design features address the limitations of the current design, and the predictions about outcomes module, which estimates the expected outcomes of the solutions (functionality and performance). Selection uses different matching criteria between predictions and requirements, like maximizing the reward, maximizing the difference between reward and cost, and minimizing cost. The criteria correspond to different cognitive selection strategies [37]. Simulation (embodiment) offers a precise (yet computationally more cumbersome) evaluation of the solution functionality and performance, e.g., using a standard circuit simulator. Circuit simulation offers knowledge embodiment into the real world, a well-known issue in cognitive architectures [49].
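The matching criteria of the select alternatives module can be sketched as follows, with predict_reward and predict_cost standing in for the two prediction modules; the function names are illustrative.

```python
def select_alternative(alternatives, predict_reward, predict_cost, criterion="reward"):
    """Pick the alternative that best matches the chosen selection strategy."""
    score = {
        "reward":      lambda a: predict_reward(a),
        "reward-cost": lambda a: predict_reward(a) - predict_cost(a),
        "min-cost":    lambda a: -predict_cost(a),
    }[criterion]
    return max(alternatives, key=score)
```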


Fig. 5. InnovA: cognitive architecture for circuit design innovation (figure). The part for objective learning and reasoning includes the population of solutions, the attention window, understanding needs, produce alternatives using incremental operations, predictions about causality, predictions about outcomes, select alternatives, simulation (embodiment), identify new BBs, and knowledge restructuring, together with a memory system of semantic memory (associative structure, connections to goals, causal sequences (justifications)), short-term memory, long-term memory, and episodic memory. The part for subjective learning and reasoning includes beliefs, preferences, emotions, and the context-dependent memory (context-dependent beliefs, preferences, and emotions). Three nested loops adjust the process: loop (i) identifies bottlenecks by comparing current and previous solutions (cues about features causing the bottlenecks), loop (ii) finds BBs that can solve the bottleneck (cues about BBs and their connections to goals), and loop (iii) decides the abstraction level of the used BBs (concepts) (cues about concepts).

Significant discrepancies between circuit simulation and the two prediction modules trigger knowledge restructuring. Discrepancies also refer to the consistency of beliefs, causalities, feature matchings, clusters, and predictions. Restructuring reorganizes the associative memory structures, connections to goals, causal sequences, beliefs, and priorities. The model architecture instantiates the operator sequence Seq_k in equation (12) following one of five possible reasoning types (Fig. 1). The corresponding algorithms are shown in Section IV. The likelihood of selecting a certain sequence Seq_k or a reusable variety for other clusters is predicted by the typicality and versatility of BBs. This is similar to utilizing previous experiences in current decisions. Research shows that simple Bayesian inference models human decision making to a certain degree [44]. The architecture realizes model constraints (12) and (13) on incremental evolution as follows. As explained, the purpose is to minimize the distance between problem requirements and solution performance and to maximize the variety of solution features at the expense of minimal evolutionary effort. These objectives are achieved through an architectural mechanism to (i) bridge the gap between clusters (niches), (ii) select the abstraction level at which features and BBs are combined with each other, (iii) produce alternatives for a concept, and (iv) fixate new solutions as distinct clusters (niches). The structure and parameters of the mechanism are discussed next. Incremental evolution selects solutions with a short distance to the problem requirements, a high gradient towards the problem goals, and a higher dissimilarity with existing solutions, as long as

there are sufficient resources for the cluster (niche). Resources decrease as new solutions are created and increase depending on the produced feature variety and performance improvement. The parameter adjustment mechanism of the architecture is shown in Fig. 5. Each BB_j is characterized by a set of causal relations var_{j,i}^{(k)} → SPerf_{j,i}^{(k)} | context_{j,i}^{(k)} and their priorities in tackling a set of problems Problem_k. (i) The inner-most loop identifies the BB causing the bottleneck of the solutions of the current Pareto fragment of the cluster. It compares the solution under investigation with previous solutions to find possible BB candidates for creating the bottleneck. As a byproduct, the step generates information about features to be avoided in future solutions. (ii) The second loop finds BBs that could address the bottleneck. These BBs are generated either through variation of the candidate block, alternative blocks from the same cluster, or alternative blocks from different clusters. (iii) The third loop decides the abstraction level at which alternative BBs are considered, hence the hierarchical level of divergence. Note that loop (ii) implements a bottom-up creation of BB variety and loop (iii) produces a top-down enforcement of constraints. The equilibrium between the two loops is decided by the adjustment mechanism. The pressure to change a BB at a higher abstraction level (hence, more important causal relations) increases when the niche resources decrease (e.g., ∼ 1/Resources) and current-level variations do not produce significant performance improvements. For loop (ii), incremental modifications correspond to variations of a current BB, a BB of a different solution of the same niche, and a BB from a different niche. For the last case, the causal sequence is maintained if the benefit of the solution exceeds the cost. Otherwise, a new causal sequence is produced depending on the bottlenecks to be addressed.
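The adjustment rule for loop (iii) can be sketched as below, treating the pressure as proportional to 1/Resources and gated by the lack of recent improvement; the functional form and the threshold are assumptions for illustration only.

```python
def abstraction_pressure(resources, recent_improvement, eps=1e-9):
    """Pressure grows as resources shrink and current-level variations stop helping."""
    stalled = 1.0 if recent_improvement <= 0 else 0.0
    return (1.0 / max(resources, eps)) * stalled

def next_abstraction_level(level, resources, recent_improvement, threshold=0.5):
    """Escalate the hierarchical level of divergence when the pressure is high."""
    if abstraction_pressure(resources, recent_improvement) > threshold:
        return level + 1
    return level
```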


Algorithm 1: Identifying circuit building blocks

Input: analog circuit C, set of known building blocks Γ
Output: set of building blocks in C
  set of building blocks Φ = ∅;
  find the feedforward and feedback signal paths of C;
  identify the main circuit C_m of C by including the transistors along the feedforward paths and the biasing transistors;
  feedback circuit C_f = C − C_m;
  Φ = find all building blocks in C_m and C_f separately by isomorphic matching with Γ;
  exclude ambiguous building blocks in Φ in case of false justifications;
  exclude building blocks in Φ that are subsets of bigger building blocks, while allowing overlapping of partially shared transistors;
  identify generic templates in Φ formed of building blocks that are repeatedly used and connected to form larger blocks;
  construct the hierarchical structure of C with Φ;
  return Φ
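The isomorphic matching step of Algorithm 1 could be prototyped with networkx subgraph isomorphism over a transistor-level connectivity graph, as in the following sketch; the graph encoding and the library format are assumptions, and this is not the paper's implementation.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def find_building_blocks(circuit_graph, bb_library):
    """circuit_graph: nx.Graph with node attribute 'type' (e.g., 'nmos', 'pmos', 'res').
    bb_library: dict mapping BB names to template nx.Graphs with the same attribute."""
    found = []
    node_match = isomorphism.categorical_node_match("type", None)
    for name, template in bb_library.items():
        matcher = isomorphism.GraphMatcher(circuit_graph, template, node_match=node_match)
        for mapping in matcher.subgraph_isomorphisms_iter():
            found.append((name, set(mapping.keys())))  # matched device set
    return found
```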

Algorithm 2: Classification of circuit design features

Input: set of circuits C, problem specification Σ
Output: classification of features Λ of circuits in set C
  for all pairs of circuits C_1, C_2 ∈ C do
    find common and distinct sub-structures of C_1 and C_2;
    compute common and distinct electrical behaviors of C_1 and C_2;
    characterize the impact of the common and distinct sub-structures on the electrical behaviors with respect to Σ;
    combine features of C_1 and C_2 for Σ;
    add all found features to Λ;
    for all matched nodes s with output arcs to unmatched nodes do
      for all matched nodes t with input arcs from unmatched nodes do
        if ∃ a signal path passing s and t then
          create an abstraction for an arc from s to t, add it to Λ;
          create new features by exploring the instance space of the abstraction, add them to Λ;
  return classified features Λ;

Input: set of circuits C, problem specification Σ Output: classification of features Λ of circuits in set C for all pairs of circuits C1 , C2 ∈ C do find common and distinct sub-structures of C1 and C2 ; compute common and distinct electrical behaviors of C1 and C2 ; characterize impact of common, distinct sub-structures on electrical behaviors with respect to Σ; combine features of C1 and C2 for Σ; add all found features to Λ; for all matched nodes s with output arcs to unmatched nodes do for all matched nodes t with input arcs from unmatched nodes do if ∃ signal path passing s and t then create abstraction for an arc from s to t, add to Λ; create new features by exploring the instance space of the abstraction, add to Λ; return classified features Λ;

saturation while the ones in the circuit actually operate in the linear region, the BB is unjustified and is excluded by the algorithm. (3) The step identifies overlapping of BBs and removes BBs that are subsets of larger BBs. Partial sharing of transistors by BBs is allowed, while full inclusion of a BB in larger BBs is not. Therefore, BBs that are subsets of other building blocks are excluded. (4) The step finds templates that are formed by repeatedly connecting the same BBs. If there are structures formed by the same connections of the same kinds of BBs, they are identified as templates. (5) It constructs the hierarchical structure of the circuit to represent the used combinations of BBs. The algorithms for supervised and unsupervised finding of the BBs in a circuit are discussed in [50]. Other BB identification methods are discussed in [51], [52].

B. Algorithm to classify design features. Classification of the circuit design features uses four operators: (i) circuit comparison, (ii) circuit instantiation-abstraction, (iii) concept combination, and (iv) design feature induction. Algorithm 2 shows the implementation of the four operators. Circuit comparison and concept combination are used to find the existing features and to create new features for all pairs of circuits. Circuit comparison can use the symbolic technique in [30] or numeric simulation, with each circuit being simulated once. For the identified signal flow paths, the method creates abstractions between two matched nodes for distinct signal paths using the instantiation-abstraction operator. The design feature induction operator generates new features by exploring the abstraction's instance space. The classification techniques used in circuit design are detailed in [32]. The execution time is around 26 minutes for a ten-circuit set.

C. Mining causal design sequences. Mining the causal design sequences of a certain circuit includes two parts. The first part mines the design sequences of circuit features, such as circuit BBs. It gives the BB structure of the circuit. The second part extracts the sequences of BB combinations for problem requirements Σ. The causal information is found during the second part, based on the device importance in achieving the problem requirements. A detailed discussion is offered in [33].



Algorithm 3: Mining causal design sequences
Input: circuit C, set of all circuit features Θ, problem requirements Σ
Output: causal design sequence seq that meets Σ
1  set of initial features S = features in Θ introduced by the author of C or by the author's group;
2  add features in S to design sequence seq;
3  Θ = Θ − S;
4  for features x ∈ Θ, from more concrete to more abstract features, do
5    if x is justified by the features in S when added to seq then
6      add x to design sequence seq;
7    else identify common features and new design insights by the authors, and add them to S;
8  based on problem requirements Σ, compute causal relations CR of the transistor parameters;
9  compute orderings of the parameters depending on different criteria;
10 add the computed orderings to seq;
11 return seq;

Algorithm 4: Reasoning alternatives
Input: problem requirements Σ, associative circuit set C
Output: final circuit design cir
1  exclude all unwanted features from C;
2  set of current trade-offs ∆ = Σ;
3  cir = selected circuit from C with features close to Σ;
4  update ∆ to exclude satisfied requirements;
5  while ∆ ≠ ∅ do
6    find circuit C′ ⊂ C with features that could address ∆;
7    if such a circuit C′ is not found then
8      find a circuit C′ with physical or abstract features that could be justified for new abstractions that address ∆;
9    if C′ is not found then
10     return cir with unsatisfied trade-offs ∆;
11   cir = combine features of circuit C′ with cir;
12   update ∆ for the new trade-offs;
13 return cir;
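The following is a minimal executable sketch of the greedy loop in Algorithm 4 above. The Circuit representation, the feature sets, and the scoring of candidate circuits are illustrative assumptions; the actual selection and combination operators are those described in [34].

```python
# Sketch of Algorithm 4's greedy selection loop, under assumed data structures.
from dataclasses import dataclass, field

@dataclass
class Circuit:
    name: str
    features: set = field(default_factory=set)  # requirement tags the circuit addresses

def reason_alternatives(sigma, circuits, unwanted=frozenset()):
    pool = [Circuit(c.name, c.features - unwanted) for c in circuits]  # step 1
    cir = max(pool, key=lambda c: len(c.features & sigma))             # step 3
    delta = sigma - cir.features                                       # steps 2, 4
    while delta:                                                       # step 5
        candidates = [c for c in pool if c.features & delta]           # steps 6-8
        if not candidates:
            return cir, delta                      # steps 9-10: give up with open trade-offs
        best = max(candidates, key=lambda c: len(c.features & delta))
        cir = Circuit(cir.name + "+" + best.name,
                      cir.features | best.features)                    # step 11
        delta -= best.features                                         # step 12
    return cir, delta                                                  # step 13

lib = [Circuit("A", {"gain", "bandwidth"}), Circuit("B", {"low_power"})]
print(reason_alternatives({"gain", "low_power"}, lib))  # combines A and B
```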

B. Algorithm to classify design features. The classification of circuit design features uses four operators: (i) circuit comparison, (ii) circuit instantiation-abstraction, (iii) concept combination, and (iv) design feature induction. Algorithm 2 shows the implementation of the four operators. Circuit comparison and concept combination are used to find the existing features and to create new features for all pairs of circuits. Circuit comparison can use the symbolic technique in [30] or numeric simulation, with each circuit being simulated once. For the identified signal-flow paths, the method creates abstractions between two matched nodes for distinct signal paths using the instantiation-abstraction operator. The design feature induction operator generates new features by exploring the abstraction's instance space. The classification techniques used in circuit design are detailed in [32]. The execution time is around 26 minutes for a ten-circuit set.

C. Mining causal design sequences. Mining the causal design sequences of a circuit includes two parts. The first part mines the design sequences of circuit features, such as circuit BBs, and gives the BB structure of the circuit. The second part extracts the sequences of BB combinations for problem requirements Σ. The causal information is found during the second part, based on the importance of the devices in achieving the problem requirements. A detailed discussion is offered in [33]. As shown in Algorithm 3, the method first finds the initial features used in designing the circuit, defined as the starting ideas. Then, for each remaining feature, if it is causally justified, it is added to the sequence; otherwise, it is used to update the starting ideas. In the end, the sequence contains the design steps showing how each feature is justified (it either improves performance or relaxes constraints) to complete the design. The second part of the method finds the causal relations of the circuit based on problem requirements Σ. The parameters are ordered based on different considerations, like the overall linearity of their control over performance, their correlations to other parameters, and the similarity of their causal relations. These orderings are also added to the design sequence for the sizing steps of the circuit.
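A compact sketch of the justification loop of Algorithm 3 described above; the feature objects and the is_justified predicate are placeholders for the criteria detailed in [33].

```python
# Sketch of the justification loop in Algorithm 3 (assumed representations).
def mine_design_sequence(theta, author_features, is_justified):
    """theta: remaining features, ordered from more concrete to more abstract."""
    starting_ideas = list(author_features)   # initial features S
    seq = list(starting_ideas)
    for x in theta:
        if is_justified(x, seq):             # causally justified design step
            seq.append(x)
        else:
            starting_ideas.append(x)         # feature updates the starting ideas
    return seq, starting_ideas

# Toy run: a feature is "justified" here if it shares a tag with the sequence.
seq, ideas = mine_design_sequence(
    theta=[("cascode", "gain"), ("chopper", "offset")],
    author_features=[("diff_pair", "gain")],
    is_justified=lambda x, seq: any(x[1] == f[1] for f in seq))
print(seq, ideas)
```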

Fig. 6. Schematics of the discussed circuits.

D. Reasoning alternatives. The five reasoning strategies (Fig. 1) depend on the kinds of starting features used in creating a solution: (1) combining physical features, (2) mixing physical and abstract features, (3) combining abstract features, (4) excluding certain features, and (5) creating novel abstractions based on existing physical features. Algorithm 4 shows the reasoning procedure for the five types of starting features. The first step excludes all unwanted features that may impose undesired effects on the performance of the final circuit (e.g., alternative (4)). Then, the initial circuit is selected from the associative circuit set. If there are unsatisfied trade-offs, another circuit is selected to address them. Physical features and abstract features are both taken into account while making the selections, and the trade-offs are updated each time a new circuit is generated. If no such circuit is found, a new abstraction from the existing physical features is needed as a starting feature (i.e., alternative (5)). A final circuit is output if all requirements are satisfied. However, if the algorithm fails to locate features that address the unsatisfied trade-offs, the final circuit is output together with the unsatisfied trade-offs. Details of the algorithms are offered in [34].

V. APPLICATIONS

This section discusses the main capabilities of the proposed cognitive architecture as compared to traditional CAD methods for analog circuit design: using causal relations in circuit design optimization, finding new building blocks, and creating circuit topologies through feature combination and reuse. Circuit features are stored in knowledge representations organized using feature similarity at various levels of abstraction.

A. Using causal relations in circuit design. This case study shows how the Predictions about causality module in Fig. 5 is used by the Select alternatives and Understand needs modules of the architecture. The discussion refers to the circuit in Fig. 6 (left).

Causal relations in design optimization. We used the causal sequences of a circuit design to guide parameter optimization during transistor sizing. Circuit sizing used the Cadence Virtuoso 6.1.5 tool [53]. Calculating the causal sequences for a circuit is fast, less than 100 seconds. In addition, time is spent finding the design points used in computing the causal sequences, e.g., around 30 minutes per point if the Cadence tool is utilized. For a circuit with N device parameters, the total time for collecting the design points is about N × Nrsamples × 30 minutes, where Nrsamples is the number of samples for each parameter. The following method resulted for using causal relations in conjunction with the Cadence tool: for a given causal sequence (computed using Algorithm 3 in Section IV), the parameters of the first transistor in the sequence are sampled more intensively than the rest, because these parameters have a higher impact on the performance attributes of interest.


TABLE I. OPTIMIZATION RESULTS FOR THE CIRCUIT IN FIG. 6 (LEFT)

performance        8h run   sweep M7   sweep M10   sweep M6   sweep M3
simulations        3452     2592       2569        1684       457
gain (dB)          71.65    72.34      63.97       67.16      53.96
bandwidth (kHz)    8.441    2.667      9.858       8.857      73.46
THD (%)            6.753    8.24       9.732       12.38      13.66

The sampled value of the first parameter is kept fixed, while the other parameters are then found using the optimization process of the Cadence tool. The process is repeated for each sampled value of the first transistor, and the parameter values of the best solution are saved. Then, the parameters of the second transistor are swept while keeping the first transistor unchanged, and the parameters of the second transistor are saved, and so on, until the entire sequence is traversed. The transistors that are not in the sequence are swept less than the ones in the sequence. As a stopping criterion, the optimization runs up to a certain time limit or until no better solutions are found for a fixed number of exploration steps. The procedure gives a higher degree of flexibility to exploring the more important parameters in the causal sequence (i.e., the parameters at the beginning of the sequence), while the later parameters in the sequence are explored under the constraints set by keeping the values of the more important parameters fixed. A tighter integration of a causal sequence and parameter exploration can be devised if the code of the transistor sizing tool is available.

Let's consider the causal sequence of devices M7 → M10 → M6 → M3. The time limits for exploration were set for the four transistors as follows: four hours for M7, two hours for M10, one hour for M6, and half an hour for M3. The different time limits were used for sweeping the parameters of the four devices according to the strategy discussed in the previous paragraph. The results were compared with an optimization run (using the Cadence tool) that sweeps all transistors simultaneously with a time limit of eight hours. The optimized performance set included gain, bandwidth, and linearity (THD). The multiple performance requirements were simultaneously tackled using the method of the Cadence tool. The results are shown in Table I. The optimization process that ran for eight hours stopped after 3452 steps because it was not able to find better solutions. For the run using the causal sequence, each column shows the results at the end of the corresponding sweep, e.g., the last column is for sweeping M3. There is a significant improvement in bandwidth with an acceptable reduction in gain and THD performance.

Conceptually, the optimization process using causal sequences can be interpreted as follows: the first parameter of the sequence (e.g., of M7) is given the highest flexibility, as it has the most impact on performance; hence, it is sampled most exhaustively by allocating it the largest chunk of time. Next, the sizes of device M10 are sampled for a shorter amount of time, as it has less impact on performance than device M7. The size of M7 is kept fixed in the second step, so the second step performs a post-optimization of M10 after deciding the value of the causally more important device M7. The same reasoning applies to the remaining devices of the causal sequence.
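The sweep-and-freeze strategy described above can be sketched as follows. Here evaluate() stands in for a circuit-simulator call (the Cadence tool in the case study), and the random sampling, size range, and samples-per-hour rate are illustrative assumptions; the sketch also omits the lighter sweeping of devices outside the sequence.

```python
# Budgeted sweep-and-freeze exploration along a causal sequence (a sketch,
# assuming a black-box cost returned by a simulator stand-in).
import random

def sweep_and_freeze(sequence, budgets_h, samples_per_hour, evaluate):
    fixed = {}                                   # device sizes frozen so far
    cost = float("inf")
    for device, budget in zip(sequence, budgets_h):
        best_size, best_cost = None, float("inf")
        for _ in range(int(budget * samples_per_hour)):
            size = random.uniform(0.35, 50.0)    # W in um, illustrative range
            c = evaluate({**fixed, device: size})
            if c < best_cost:
                best_cost, best_size = c, size
        fixed[device] = best_size                # freeze before the next device
        cost = best_cost
    return fixed, cost

# Sequence and time budgets from the case study; the cost function is a toy.
sizes, cost = sweep_and_freeze(
    ["M7", "M10", "M6", "M3"], [4, 2, 1, 0.5], samples_per_hour=20,
    evaluate=lambda s: sum((w - 10.0) ** 2 for w in s.values()))
print(sizes, cost)
```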

TABLE II. OPTIMIZATION RESULTS FOR THE CIRCUIT IN FIG. 6 (LEFT) WITH ADDITIONAL NOISE REQUIREMENT

performance        8h run   sweep M10   sweep M3   sweep M7   sweep M6
simulations        2728     2051        2679       1049       896
gain (dB)          70.79    65.01       70.02      63.44      67.33
bandwidth (kHz)    9.35     2.741       21.11      11.68      32.2
THD (%)            7.017    9.323       7.493      7.819      14.6
noise (RMS ampl.)  61.1     66.75       50.87      31.29      31.5

Fig. 7. Topology synthesis: feature combination and topology refinement.

Causal sequences in incremental optimization. Next, for the circuit in Fig. 6 (left), we added noise as an additional performance requirement to the previous circuit optimization problem. Causal information was used in a similar way as in the previous case. The following causal sequence was utilized: M10 → M3 → M7 → M6. Note that the devices have a different importance (priority) in this case. The results are shown in Table II. As in the previous case, there are relevant increases in bandwidth at the expense of acceptable gain and THD reductions. There is a significant noise reduction too. The optimization process without causal information likely reached a local optimum in which it got trapped. Causal information allows a broader exploration of the parameter values, in which the consideration given to each parameter is adapted to its relevance to the problem.

B. Building block identification. The circuit shown in Fig. 6 (right) is used as an example for identifying BBs. First, all signal paths were found. For instance, there are multiple signal paths from input Vinp to output Voutn, including Vinp → M23 → M7 → M4 → Voutn and Vinp → M23 → R1 → R0 → M25 → M6 → M7 → M4 → Voutn. There are ambiguous BBs that need to be excluded, i.e., the cascode current sources (CCS) M12+M7, M11+M6, and M10+M5. This is because M5, M6, and M7 operate in the linear region due to transconductance tuning (TT); the requirement for operation in the linear region must be specified as an input. Therefore, the three CCSs are unjustified and are excluded. Instead, M10, M11, and M12 are individually identified as basic current sources (BCS). Differential input (DI) M23+M25 shares transistor M23 with source degeneration (SD) M23+R1 and transistor M25 with SD M25+R0; because BB overlapping is supported, both BBs are identified. BCSs M10, M11, M12, and M13 are connected to form a series topological structure that provides the same functionality, so the four BCSs are identified as a template called BCS+ that consists of all four transistors. The circuit is composed of the identified BBs, and its hierarchical structure was built by connecting the BBs from top to bottom: BCS + TT + DI + SD + CCS. The identified BBs are manually reviewed, as the tool is still being validated. The execution time was around 32 seconds; it increases with the number of signal paths in the circuit.
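The signal-path step of this example can be illustrated with a standard path enumeration; the directed edge list below is an assumed toy stand-in for the actual netlist graph of the circuit in Fig. 6 (right).

```python
# Enumerate all simple signal paths from Vinp to Voutn (a sketch; the edges
# are a minimal stand-in for the real netlist connectivity).
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("Vinp", "M23"), ("M23", "M7"), ("M7", "M4"), ("M4", "Voutn"),
    ("M23", "R1"), ("R1", "R0"), ("R0", "M25"), ("M25", "M6"), ("M6", "M7"),
])

for path in nx.all_simple_paths(g, source="Vinp", target="Voutn"):
    print(" -> ".join(path))
# Prints the two paths quoted in the text:
# Vinp -> M23 -> M7 -> M4 -> Voutn
# Vinp -> M23 -> R1 -> R0 -> M25 -> M6 -> M7 -> M4 -> Voutn
```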


Fig. 8. Combining topological features.

Fig. 9. Design refinement (case 1).


C. Creating new topologies (structures). Topology synthesis creates new circuit topologies for a set of performance requirements and constraints. It includes the two situations shown in Fig. 7: devising a new topology through the combination of existing features, and topology refinement. With respect to the cognitive architecture in Fig. 5, the case study utilized the following modules: the Memory system, including the Associative structure used to store the design knowledge, Connections to goals, and Causal sequences; Produce alternatives using incremental operations; Select alternatives; Understanding needs; Predictions about outcomes; and Predictions about causalities.

Structural synthesis through feature combination. In topological feature combination, features are first selected from the knowledge base based on their justification. Topological features are treated as equally important and combined in a reasoning-based flow. Topological features in an existing design can be physical structures, e.g., the cross-coupled input stage in Fig. 8, or abstractions that correspond to alternative physical structures, e.g., a three-stage structure can be implemented in multiple ways. For example, the synthesis of a new high-performance OpAmp/OTA utilizes the starting features to identify a design sequence that is likely to create a performance-satisfying solution. Each step of the sequence corresponds to combining topological features from different circuits or to refining a topological feature of one existing circuit. Each step is justified in that it either improves performance or relaxes the design constraints. Steps are added to the sequence based on their design improvement until the design solution meets the specification.

The example on combining topological features refers to the design of a new low-voltage, low-power op-amp (Fig. 8). The starting features of the solution were identified as follows. Two topological features, an adaptive-biasing class AB input with local common-mode feedback and a three-stage structure with frequency compensation, were selected from two different circuits, as they improve slew rate and achieve near-optimal current efficiency. The input stage is general and can be extended to virtually any class AB input stage. Regarding adaptive biasing for decreasing the current during the sampling phase, this scheme is only suitable for switched-capacitor circuits. Stability issues of the gain-boosted cascode structures also needed to be carefully addressed; thus, the starting ideas selected the abstract feature of a three-stage amplifier. The active feedback frequency compensation technique was identified to solve the stability issues, as it is easier to design and does not consume additional power. As shown in [35], compared to the circuit in [54], the new circuit in Fig. 8 has the following performance for a 0.2µm CMOS technology and a 5pF capacitive load (the first values are for the new circuit): superior gain of 58.2dB vs. 30.6dB, higher gain-bandwidth product (24MHz vs. 16.4MHz), phase margin of 84.8° vs. 88.3°, and better slew rate, 5.1 V/µs vs. 4.9 V/µs. The static power consumption is 68µW @ 1V vs. 73µW @ 2V.

Fig. 10. Design refinement (case 2).

Note that the purpose of the comparison is to show the architecture's capability to design a new low-power, low-supply circuit operating at 1V starting from an existing circuit working at 2V.

Circuit topology refinement. Refinement selects from the memory system of the cognitive architecture an existing circuit that offers a close specification but does not yet satisfy the problem. Then, topological features are identified in the long-term memory of the cognitive architecture to be combined with the circuit, so that the tradeoffs are modified to satisfy the missing problem requirements.

Let's take the problem of designing a low-power amplifier that optimizes the gain-bandwidth product. The existing design in Fig. 9 (left) is selected. It is a high-gain, high-frequency circuit; its feed-forward compensation path is a feature that provides frequency compensation without using a Miller capacitor. The refinement of the circuit realizes the three gain stages with cascode, current mirror, and common-source stages. The circuit is kept single-input, single-ended output. The revision is causally justified by the multistage boosted gain and higher power efficiency. The resulting circuit is shown in Fig. 9 (right). As discussed in [34], compared to the original circuit in Fig. 9 (left), the new circuit in Fig. 9 (right) has, for a 0.6µm CMOS process and ±1.25V supply voltage (the first values are for the new circuit), a 13% higher gain-bandwidth product (620MHz vs. 539MHz), a gain of 73dB vs. 71dB, the same bandwidth of 0.15MHz for both circuits, a power consumption of 0.65mW vs. 0.63mW, and a noise of 2.3e−12 V²/Hz @ 20MHz vs. 1.8e−12 V²/Hz @ 20MHz. The new circuit has a faster settling time in response to a step input, 27.24ns @ 20MHz vs. 33.16ns @ 20MHz, and a superior slew rate @ 20MHz, 7.02e9 V/sec vs. 44.56e6 V/sec. The THD @ 20MHz of the new circuit was also superior, 13.96% vs. 28.29%.

Another example requires creating a low-power, high-gain OpAmp with a low supply voltage.


The existing design in Fig. 10 (left) is selected for refinement. It is a class AB amplifier with a class AB input stage and local common-mode feedback. The causal justification for selecting the circuit is as follows: the class AB input stage and local common-mode feedback mainly serve to achieve near-optimal current efficiency. A refinement of the circuit adds a simple gain-boosting stage, causally justified by improving the gain for a low supply voltage. Also, a low-voltage current mirror replaces the original current mirror, causally justified by the increased output resistance. The resulting circuit topology is presented in Fig. 10 (right). As shown in [34], compared to the original circuit in Fig. 10 (left), the new circuit in Fig. 10 (right) has, for a 0.6µm CMOS process and ±1V supply voltage, a 27% higher gain (62dB vs. 45dB), while the power consumption is only 3% larger (0.1mW vs. 0.097mW). The other performance values are as follows: bandwidth, 0.02MHz vs. 0.18MHz; gain-bandwidth product, 25.2MHz vs. 32MHz; phase margin, 54° vs. 61°; and noise, 1.2e−14 V²/Hz @ 20MHz vs. 5.3e−16 V²/Hz @ 20MHz.

VI. CONCLUSIONS

This paper presents a cognitive architecture for creative problem solving in analog circuit design, including activities like circuit topology creation (synthesis), incremental topology modification, and design knowledge reuse. These activities refer to open-ended and ill-defined problems and require discovering new knowledge besides producing a solution; they are difficult to tackle with existing algorithmic methods. The proposed cognitive architecture is based on a new theoretical model with the goal of achieving more robust divergence, like creating new building blocks (BBs) and circuit structures (i.e., connections of BBs). The learned knowledge is reusable for solving new problems and includes new BBs, causal relations between BB parameters and outcomes (circuit performance), reasoning strategies, beliefs, priorities, and architectural parameters.

The cognitive architecture is modeled after the main cognitive activities used in creative problem solving, like concept comparison (matching), concept formation, and concept combination. The memory system organizes the domain knowledge into three parts, associative structure, connections to goals, and causal sequences, to reflect concept similarities and differences with respect to structure and purpose in solving a problem (justification). In addition, the architecture incorporates modules producing new solutions using incremental operations following five reasoning strategies, predictions about outcomes and causality, understanding needs (i.e., performance bottlenecks), recognizing new BBs, and selecting new solutions from alternatives. The algorithms for these modules are discussed. The cognitive architecture also includes modules modeling the effect of emotions on memory formation and decision making; the latter modules are currently under development.

Future work will focus on three problems. First, it will study how the parameters of the three feedback loops in Fig. 5 adjust for different kinds of open-ended and ill-defined circuit design problems, like problem framing and problem solving (ideation). This is important in real life, as problems often involve a mixture of problem framing and ideation requirements. Second, future work will address the part for subjective learning and reasoning.

This component requires studying algorithms for modeling the beliefs and preferences of individuals, including expert designers. This is relevant not only for replicating a certain design style, but also for devising more effective training guidelines. Finally, work will address using the cognitive architecture for problem solving beyond analog circuit design. We think that the main aspects refer to finding equivalent concept representations, like circuit schematics, and similar embodiments, like circuit simulators.

REFERENCES

[1] A. Doboli and A. Umbarkar, "The role of precedents in increasing creativity during iterative design of electronic embedded systems," Design Science, vol. 35, no. 3, pp. 298–326, 2014.
[2] G. Goldschmidt, "Capturing indeterminism: representation in the design problem space," Design Studies, vol. 18, pp. 441–455, 1997.
[3] D. Schon, The Reflective Practitioner. BasicBooks, 1983.

[4] A. Newell, J. Shaw, and H. Simon, "Report on a general problem-solving program," in Proc. International Conference on Information Processing, 1959, pp. 256–264.
[5] J. Anderson, "Acquisition of cognitive skill," Psychological Review, vol. 89, pp. 369–406, 1982.
[6] A. Bandura, "Self-efficacy mechanisms in human agency," American Psychologist, vol. 37, pp. 122–147, 1982.
[7] R. Carley and R. Rutenbar, "How to automate analog IC designs," IEEE Spectrum Magazine, pp. 26–30, Aug. 1988.
[8] R. Harjani, R. Rutenbar, and L. Carley, "OASYS: A framework for analog circuit synthesis," IEEE Transactions on CAD, vol. 8, no. 12, pp. 1247–1266, 1992.
[9] F. El-Turky and E. Perry, "BLADES: An artificial intelligence approach to analog circuit design," IEEE Transactions on CADICS, vol. 8, no. 6, pp. 680–692, 1989.
[10] M. Barros, J. Guilherme, and N. Horta, Analog Circuits and Systems Optimization based on Evolutionary Computation Techniques. Springer, 2010.
[11] W. Kruiskamp and D. Leenaerts, "DARWIN: CMOS opamp synthesis by means of genetic algorithm," in Proc. Design Automation Conference, 1995, pp. 433–438.
[12] T. McConaghy, P. Palmers, P. Gao, M. Steyaert, and G. Gielen, Variation-aware Analog Structural Synthesis. Springer, 2009.
[13] C. Ferent and A. Doboli, "Measuring the uniqueness and variety of analog circuit design features," Integration, the VLSI Journal, vol. 44, no. 1, pp. 39–50, 2011.
[14] J. Anderson, The Architecture of Cognition. Harvard University Press, 1983.
[15] J. Anderson, "ACT: A simple theory of complex cognition," American Psychologist, vol. 51, pp. 355–365, 1996.
[16] J. Anderson, Learning and Memory: An Integrated Approach. John Wiley & Sons, 2000.
[17] J. Laird, The SOAR Cognitive Architecture. The MIT Press, 2012.
[18] D. Vernon, Artificial Cognitive Systems: A Primer. The MIT Press, 2014.
[19] D. Kieras and D. Meyer, "An overview of the EPIC architecture for cognition and performance with application to human-computer interaction," Human-Computer Interaction, vol. 12, no. 4, pp. 391–438, 1997.
[20] P. Rosenbloom, A. Demski, and U. Volkan, "The Sigma cognitive architecture and system: Towards functionally elegant grand unification," vol. 7, no. 1, 2016.
[21] R. Sun, "A tutorial on CLARION 5.0," Cognitive Science Department, Rensselaer Polytechnic Institute, http://www.cogsci.rpi.edu/~rsun/sun.tutorial.pdf, 2003.


[22] D. Friedlander and S. Franklin, "LIDA and a theory of mind," in Proc. Conference on Advances in Artificial General Intelligence, 2008, pp. 137–148.
[23] N. Hawes and J. Wyatt, "Developing intelligent robots with CAST," in Proc. IROS Workshop on Current Software Frameworks in Cognitive Robotics Integrating Different Computational Paradigms, 2008, pp. 14–18.
[24] P. Langley, "Cognitive architectures and general intelligent systems," AI Magazine, vol. 27, no. 2, pp. 33–44, 2006.
[25] G. Metta et al., "The iCub humanoid robot: An open system platform for research in cognitive development," Neural Networks, vol. 23, no. 8–9, pp. 1125–1134, 2010.
[26] J. Hampton, "Emergent attributes in combined concepts," in T. Ward, S. Smith, and J. Vaid (eds.), Conceptual Structures and Processes: Emergence, Discovery and Change. American Psychological Association, 1996.
[27] E. Wisniewski, "When concepts combine," Psychonomic Bulletin & Review, vol. 4, no. 2, pp. 167–183, 1997.
[28] X. Liu and A. Doboli, "Moving beyond traditional electronic design automation: Data-driven design of analog circuits," in Proc. International Conference on Synthesis, Modeling, Analysis, and Simulation Methods and Applications to Circuit Design, 2016, pp. 1–4.
[29] C. Ferent and A. Doboli, "An axiomatic model for concept structure description and its application to circuit design," Knowledge-Based Systems, vol. 45, pp. 114–133, June 2013.
[30] C. Ferent and A. Doboli, "Symbolic matching and constraint generation for systematic comparison of analog circuits," IEEE Transactions on CADICS, vol. 32, no. 4, pp. 616–629, 2013.
[31] C. Ferent and A. Doboli, "Formal representation of the design feature variety in analog circuits," in Proc. FDL Conference, 2013.
[32] C. Ferent and A. Doboli, "Analog circuit design space description based on ordered clustering of feature uniqueness and similarity," Integration, the VLSI Journal, vol. 47, no. 2, pp. 213–231, 2014.
[33] F. Jiao, S. Montano, C. Ferent, A. Doboli, and S. Doboli, "Analog circuit design knowledge mining: Discovering topological similarities and uncovering design reasoning strategies," IEEE Transactions on CADICS, vol. 34, no. 7, pp. 1045–1059, 2015.
[34] F. Jiao and A. Doboli, "Knowledge-intensive, causal reasoning for analog circuit topology synthesis in emergent and innovative applications," in Proc. Design, Automation and Test in Europe Conference (DATE), 2015, pp. 1144–1149.
[35] F. Jiao and A. Doboli, "A low-voltage, low-power amplifier created by reasoning-based, systematic topology synthesis," in Proc. International Symposium on Circuits and Systems (ISCAS), 2015, pp. 2648–2651.
[36] B. Lake, R. Salakhutdinov, and J. Tenenbaum, "Human-level concept learning through probabilistic program induction," Science, vol. 350, no. 6266, pp. 1333–1338, 2015.
[37] D. Kahneman and A. Tversky (eds.), Choices, Values, and Frames. Cambridge University Press, 2000.
[38] J. Hampton, "Similarity-based categorization and fuzziness of natural categories," Cognition, vol. 65, no. 2–3, pp. 137–165, 1998.
[39] A. Markman and B. Ross, "Category use and category learning," Psychological Bulletin, vol. 129, no. 4, pp. 592–613, 2003.
[40] A. Doboli, A. Umbarkar, S. Doboli, and J. Betz, "Modeling semantic knowledge structures for creative problem solving: Studies on expressing concepts, categories, associations, goals and context," Knowledge-Based Systems, vol. 78, pp. 34–50, April 2015.
[41] M. Schilling, "A small-world network model for cognitive insight," Creativity Research Journal, vol. 17, no. 2–3, pp. 131–154, 2005.
[42] M. Steyvers and J. Tenenbaum, "The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth," Cognitive Science, vol. 29, no. 1, pp. 41–78, 2005.
[43] A. Wagner, The Origins of Evolutionary Innovations. Oxford University Press, 2011.
[44] K. Doya, S. Ishii, A. Pouget, and R. Rao, Bayesian Brain. The MIT Press, 2007.
[45] K. Stenning and M. van Lambalgen, Human Reasoning and Cognitive Science. The MIT Press, 2008.
[46] R. Adolphs and A. Damasio, "The human amygdala in human judgement," Nature, vol. 393, no. 6684, pp. 470–474, 1998.
[47] J. Tao and T. Tieniu, "Affective computing: A review," in Affective Computing and Intelligent Interaction, LNCS 3784. Springer, 2005, pp. 981–995.
[48] H. Tang, H. Zhang, and A. Doboli, "Refinement based synthesis of continuous-time analog filters through successive domain pruning, plateau search and adaptive sampling," IEEE Transactions on CADICS, vol. 25, no. 8, pp. 1421–1440, 2006.
[49] L. Barsalou, "Grounded cognition," Annual Review of Psychology, vol. 59, pp. 617–645, Jan. 2008.
[50] H. Li and A. Doboli, "Analog circuit topological feature extraction with unsupervised learning of new substructures," in Proc. Design, Automation and Test in Europe Conference, 2016, pp. 1509–1512.
[51] T. Massier, H. Graeb, and U. Schlichtmann, "The sizing rules method for CMOS and bipolar analog integrated circuit synthesis," IEEE Transactions on CADICS, vol. 27, pp. 2209–2222, 2008.
[52] N. Rubanov, "SubIslands: The probabilistic match assignment algorithm for subcircuit recognition," IEEE Transactions on CADICS, vol. 22, no. 1, pp. 26–38, 2003.
[53] Cadence Design Systems, Virtuoso Analog Design Environment XL User Guide, Product Version 6.1.5; Cadence Advanced Analysis Tools User Guide, 2011.
[54] A. Lopez-Martin, S. Baswa, J. Ramirez-Angulo, and R. Carvajal, "Low-voltage super class AB CMOS OTA cells with very high slew rate and power efficiency," IEEE Journal of Solid-State Circuits, vol. 40, no. 5, pp. 1068–1077, 2005.

Hao Li (S'15) received the B.S. degree from Beijing University of Posts and Telecommunications, Beijing, China, in 2012. He is currently pursuing the Ph.D. degree in Computer Engineering at Stony Brook University, Stony Brook, NY, USA. His current research interests include design automation for analog circuits, mainly design knowledge mining and categorization, and cognitive architecture design.

Fanshu Jiao (S'14–M'17) received the B.S. degree from the University of Science and Technology of China, Hefei, China, in 2011, and the Ph.D. degree from the State University of New York at Stony Brook, Stony Brook, NY, USA, in 2016. Her current research interests include analog circuit design automation, particularly methods for design knowledge mining and the development of knowledge mining tools.

Xiaowei Liu (S'15) received the bachelor's degree in communication engineering from Beijing University of Posts and Telecommunications, China, in 2012. She is currently working towards the Ph.D. degree in Electrical and Computer Engineering at Stony Brook University. Her research interests include knowledge discovery in communities and CPS design for social applications.

Alex Doboli (S'99–M'01–SM'07) received the M.S. and Ph.D. degrees in Computer Science from "Politehnica" University, Timisoara, Romania, in 1990 and 1997, respectively, and the Ph.D. degree in Computer Engineering from the University of Cincinnati, Cincinnati, OH, in 2000. He is a Professor in the Department of Electrical and Computer Engineering, Stony Brook University (SUNY), NY. His research is mainly in mixed-signal CAD.

Simona Doboli Photograph and biography not available at the time of publication.
