Top-Down Design

Bottom-Up Versus Top-Down Design

Rex Hartson, Pardha Pyla, in The UX Book (Second Edition), 2019

13.4.2 Characteristics of Top-Down Design

Top-down design can be perceived as visionary. Because top-down designs are not constrained by current work practice, they can turn out startlingly different and even futuristic designs.

Top-down design is heavily driven by domain knowledge. Designers need extensive domain knowledge to be able to abstract the nature of work in that domain. This usually translates to the need for designers to envision multiple work activity instances in that domain.

Being a potential user in the domain is good for a designer. An important factor in the success of Apple designers practicing top-down design is the fact that they could see themselves as users of the iPod and iPad, etc.

Another example of a domain in which designers can see themselves as users is photography. To design a photo editing and management application, it would help, perhaps even be essential, for designers to be experts in photography and avid users of such a product, which would provide the immersion necessary to think creatively about the problem.

URL: https://www.sciencedirect.com/science/article/pii/B9780128053423000138

Integrated System Modeling

Peter J. Ashenden, ... Darrell A. Teegarden, in The System Designer's Guide to VHDL-AMS, 2003

25.1 Top-down Design

Top-down design methodologies have had a profound impact on digital system design. They allow us to quickly and efficiently specify, design, synthesize and verify designs ready for fabrication. The key to these methodologies is synthesis, which relies on a mapping between the logical functions we use in a design and the physical circuits that realize the functions. Synthesis technology allows us to work at higher levels of abstraction and to delegate physical implementation details to automatic tools.

The number of digital building blocks required for synthesis is relatively small. Most digital designs can be implemented with a handful of basic logic gates (for example, and, or, not, nand, nor and xor gates) and storage devices (for example, registers, flip-flops and latches). The same cannot be said for analog, mixed-signal and mixed-technology designs. The building blocks for these design domains are far more numerous and sophisticated. Furthermore, their behavior cannot be captured as readily as that of digital building blocks. Nonetheless, a methodical top-down design approach is appealing.
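The small set of digital primitives is, in fact, even smaller than the list suggests: every gate named above can be composed from nand alone, which is part of what makes a fixed mapping from logic to circuits tractable. A brief Python sketch (an illustration, not part of the chapter) makes this concrete:

```python
# Illustrative sketch: building the basic logic gates from NAND alone.
def nand(a, b):
    return not (a and b)

def not_(a):          # NOT: NAND with both inputs tied together
    return nand(a, a)

def and_(a, b):       # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):        # OR by De Morgan: a or b = NAND(not a, not b)
    return nand(not_(a), not_(b))

def xor(a, b):        # XOR from four NAND gates
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustive check against Python's own Boolean operators
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
```

The exhaustive loop plays the role of low-level verification: the composed gates are checked against their behavioral specification over the full input space.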

Unfortunately, analog synthesis technology is very much in its infancy. We do not have comprehensive synthesis tools for every analog, mixed-signal or mixed-technology design domain; however, we do have some tools for specific applications, such as filter design. Such tools are similar to digital synthesis tools in that they start with a functional specification of a system, they rely on fixed circuit topologies and building blocks for implementation, they are retargetable to different technologies and the results are verifiable through low-level simulation.

In the absence of general synthesis tools, we must resort to manual design refinement. In this case, a well-ordered top-down design flow is even more important to manage complexity. Figure 25-1 compares a digital design flow based on top-down design and bottom-up verification with a possible flow for analog, mixed-signal and mixed-technology design. (This diagram is derived from work by Pratt [32]. It was originally developed with a focus on mixed-signal IC design, but is extended to cover the design of larger and more diverse systems.) There are many parallels and several significant differences between the two flows. In this chapter, we explore the similarities and differences, discuss the role of VHDL-AMS in each, and illustrate the steps with examples from our RC airplane case studies.

FIGURE 25-1. Top-down design for digital versus analog/mixed-signal/mixed-technology.

While VHDL-AMS facilitates top-down design, use of the language is not in itself sufficient to guarantee success. Top-down design also depends on proper staffing and work methodologies. The design process described here has many similarities to software development processes. The use of VHDL-AMS is not unlike the use of UML (Unified Modeling Language [15]) for software design and analysis. The suite of test benches for the system model is similar to a set of use cases for software analysis. Modern software development best practices dictate that certain methodologies increase the likelihood of success. The use of VHDL-AMS can also benefit from these practices.

Modern best practices include such concepts as configuration management, where an individual or group manages and controls changes to the design or code. This ensures a stable and predictable environment in which to develop new work, without confusion caused by simultaneous changes made by other developers. Configuration management also includes the concept of revision control, allowing a historical sequence of design changes to be captured, managed and, if necessary, recalled. Finally, configuration management includes the notion of orderly integration and merging of changes into a carefully controlled main flow of work destined to become the delivered product. Typically, integration is done by a change control board, which is a small group of individuals responsible for the final work products.

In software development projects it is common for there to be an individual (or small group) responsible for the overall management of the system architecture. It is their job to be able to answer the top-level system design and implementation questions. For top-down design using VHDL-AMS, it is also helpful to have an individual responsible for the overall architecture and its representation and simulation in VHDL-AMS. The system architect facilitates and coordinates the efforts of all of the developers and can significantly improve everyone's effectiveness.

Software development efforts often employ individuals with specialized skills, such as software builders and quality-assurance engineers. These individuals work with the software developers to integrate components into the system and verify that the system requirements are met. Specialists are also useful in top-down design projects using VHDL-AMS. They can help to integrate the VHDL-AMS models of subsystems into an overall system model and verify performance using test benches.

URL: https://www.sciencedirect.com/science/article/pii/B9781558607491500256

Prototyping

Merle P. Martin, in Encyclopedia of Information Systems, 2003

II.A Prototyping Approaches

Prototyping can be used as a tool within the SDLC or a methodology within which the SDLC proceeds. Three different approaches to using prototyping are rapid application development (RAD), radical top-down, and bottom-up prototyping.

Often RAD is confused with prototyping itself. Yet prototyping is only part of the RAD methodology. Burch (1992) defines RAD as: "a methodology that combines joint application development, specialists with advanced tools (SWAT) teams, computer-aided system and software engineering tools, and prototyping to develop major parts of a system quickly."

The radical top-down design approach emphasizes timely delivery of a functional system even if that system is not yet complete. It counters traditional approaches, in which the system is not delivered until it is complete but far too often arrives later than promised. Radical top-down design proceeds as follows:

A complete systems hierarchical chart is developed.

Lower level (primitive) modules are prioritized based upon user needs and criticality to the total system.

Control (upper-level) modules are developed and tested first to establish the system's navigation path.

Lower level modules are "stubbed out." A program stub is an incomplete program module that, when called, immediately passes control back to the calling module, displaying a message such as "Under Construction."

One by one, the lower level modules are developed and "unstubbed" according to module priority.

This continues until the promised delivery date arrives, when the incomplete but functional application system is delivered to the client.

Work continues on lower priority modules until the system is complete.

This approach lends itself quite well to use of prototyping since, as discussed later, prototyping is a top-down process.
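The stub-and-unstub cycle in the steps above can be sketched in a few lines of Python (the module names and messages are hypothetical, chosen only for illustration):

```python
# Illustrative sketch of radical top-down design: control modules come
# first, and lower-level modules start life as stubs that immediately
# return control with an "Under Construction" message.
def stub(name):
    def module():
        return f"{name}: Under Construction"
    return module

# Lower-level (primitive) modules, initially all stubbed out.
modules = {
    "enter_order":   stub("enter_order"),
    "print_invoice": stub("print_invoice"),
}

def main_menu(choice):
    """Upper-level control module: establishes the navigation path."""
    return modules[choice]()

print(main_menu("enter_order"))    # -> enter_order: Under Construction

# "Unstubbing": the highest-priority module is developed and swapped in
# while the rest of the system keeps running.
modules["enter_order"] = lambda: "order recorded"
print(main_menu("enter_order"))    # -> order recorded
print(main_menu("print_invoice"))  # still a stub
```

The navigation path works from day one, and each unstubbed module adds function without disturbing the delivered system.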

Bottom-up prototyping addresses the concerns of system developers who are skeptical of a top-down approach because it may delay the more intricate and complex programming (the primitive modules) until too late in the SDLC. Some organizations prefer to use prototyping not as an overall methodology, but as an optional tool that can be used within each SDLC stage. In this approach, prototypes tend to be narrower in scope but deeper in detail.

URL: https://www.sciencedirect.com/science/article/pii/B0122272404001404

DSP Integrated Circuits

Lars Wanhammar, in DSP Integrated Circuits, 1999

Top-Down Approach

In the top-down design approach, the whole system is successively partitioned into a hierarchy of subsystems. On the top level a behavioral description is used. This description is partitioned into a structural description with behavioral descriptions of the components. This process of decomposition is then repeated for the components until sufficiently simple components are obtained. The end result is a functional description of the system. The subsystems are assumed to be implemented by the corresponding hierarchy of virtual machines. Of course, the design becomes easier if these hierarchies are made similar or identical. Figure 1.13 illustrates the top-down approach using a structural decomposition. The design process (partitioning) will essentially continue downward with stepwise refinement of the subsystem descriptions [13]. It is advantageous if the partitioning is done so that the complexities at all hierarchical levels are about the same.

Figure 1.13. The top-down approach

In the top-down approach we develop the final system stepwise by realizing and validating each design level in software. By first building the DSP system in software, the performance can be estimated more accurately. Correctness of the design, as well as of the specification, can be verified or validated before making a commitment to a particular technology and investing in expensive hardware design. In each design iteration, the subsystems are described in more and more detail so that they come closer and closer to their intended implementation. An advantage of this approach is that the system is developed from a global specification and that the successive design models can be checked for their correctness, since they are described using an executable language. The top-down approach guarantees that larger and more important questions are answered before smaller ones.
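The point about checking successive executable models against one another can be illustrated with a small sketch, here in Python rather than VHDL for brevity (the 4-bit adder example is invented): a behavioral specification is compared exhaustively against its structural refinement.

```python
# Hedged sketch: a behavioral model (the specification) versus a
# structural refinement of the same function, verified against each
# other -- the benefit of executable design descriptions.
def add_behavioral(a, b):
    return (a + b) % 16          # behavioral spec of a 4-bit adder

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add_structural(a, b):
    """Structural refinement: a ripple-carry chain of four full adders."""
    result, carry = 0, 0
    for i in range(4):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= s << i
    return result

# The refined model is checked exhaustively against the specification
# before any commitment to hardware is made.
for a in range(16):
    for b in range(16):
        assert add_structural(a, b) == add_behavioral(a, b)
```

Each further refinement (for example, replacing the full adder with a gate-level model) would be validated the same way against the level above it.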

As mentioned before, and illustrated in Figure 1.14, a typical system design begins with the development of a prototype (non-real-time) of the whole DSP system using either a conventional language, such as C, or, preferably, a hardware description language such as VHDL. The latter will be described in brief in section 1.6.6.

Figure 1.14. Top-down design strategy

After the validation of this initial (often sequential) description of the DSP system, it can be used as the basic system description. Subsequently, the system is hierarchically decomposed into a set of subsystems that at the lowest level implement well-known functions. This is one of the most important tasks in the system design phase—to partition the whole system into a set of realizable subsystems and to determine their individual specifications—because the partitioning will have a major effect on the system performance and cost. Typically, the new system description, which has explicit descriptions of the subsystems, is first derived without regard to time. However, it is advantageous at this stage to use, for example, VHDL instead of a conventional sequential computer language, since conventional languages have no mechanisms for describing time and the parallel execution of subsystems.

Generally, a sequential execution of the subsystems cannot meet the real-time requirements of the application. In the next design step, called the scheduling phase, the sequential description is therefore transformed into a parallel description where the subsystems are executed concurrently. In this step, synchronization and timing signals must be introduced between the subsystems.

If a satisfactory solution cannot be found at a certain design level, the design process has to be restarted at a higher level to ensure a correct design. Indeed, the whole design process is in practice an iterative process. Often the whole system design can be split into several parallel design paths, one branch for each main block. The different parts of the system can therefore often be designed by independent design teams.

The next design step involves the mapping of the algorithms that realize the subsystems onto suitable software–hardware structures. This design step can be performed using the strategies discussed in sections 1.3 and 1.4.

In the direct mapping approach, discussed in section 1.4.2, the operations are scheduled to meet the throughput requirements and at the same time minimize the implementation cost. Scheduling techniques for this purpose will be discussed in detail in Chapter 7. Further, in this design step a sufficient amount of resources (i.e., processing elements, memories, and communication channels) must be allocated to implement the system. Another important problem in this design phase is to assign each operation to a resource on which it will be executed. The next design step involves synthesis of a suitable architecture with the appropriate number of processing elements, memories, and communication channels or selection of a standard (ASIC) signal processor. In the former case, the amount of resources as well as the control structure can be derived from the schedule. The implementation cost depends strongly on the chosen target architecture. This design step will be discussed in Chapters 8 and 9.

The last step in the system design phase involves logic design of the functional blocks in the circuit architecture [1,14]. The result of the system design phase is a complete description of the system and subsystems down to the transistor level.

Figure 1.15 shows a typical sequence of design steps for a digital filter. The passband, stopband, and sample frequencies and the corresponding attenuations are given by the filter specification. In the first step, a transfer function meeting the specification is determined. In the next step, the filter is realized using a suitable algorithm. Included in the specification are requirements for sample rate, dynamic signal range, etc.

Figure 1.15. Idealistic view of the design phases for a digital filter

The arithmetic operations in the algorithm are then scheduled so that the sample rate constraint is satisfied. Generally, several operations have to be performed simultaneously. The scheduling step is followed by mapping the operations onto a suitable software–hardware architecture. We will later present methods to synthesize optimal circuit architectures.
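A toy version of such a scheduling step can be sketched as follows (the dependence graph and the uniform one-step operation latency are invented for illustration): operations are packed into time steps, at most one per processing element, while respecting data dependencies.

```python
# Illustrative sketch of resource-constrained list scheduling.
# Each operation takes one time step; deps maps an operation to the
# operations that must finish before it can start.
deps = {
    "m1": [], "m2": [], "m3": [],   # three multiplications
    "a1": ["m1", "m2"],             # additions depend on the products
    "a2": ["a1", "m3"],
}

def list_schedule(deps, num_pe):
    done, schedule = set(), []
    while len(done) < len(deps):
        # Operations whose predecessors have all completed
        ready = [op for op in deps
                 if op not in done and all(d in done for d in deps[op])]
        step = ready[:num_pe]       # at most num_pe operations per step
        schedule.append(step)
        done.update(step)
    return schedule

schedule = list_schedule(deps, num_pe=2)
print(schedule)    # -> [['m1', 'm2'], ['m3', 'a1'], ['a2']]
```

With two processing elements the five operations complete in three time steps; adding more would not help here, because the data dependencies, not the resources, become the bottleneck. Real schedulers must additionally weigh implementation cost, as discussed in the text.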

The final design step involves the logic design of the architectural components, i.e., processing elements, memories, communication channels, and control units. Communication issues play an important role, since it is expensive in terms of time and power consumption to move information from one point to another on the chip. The final result of the subsystem design phase is a circuit description in terms of basic building blocks: gates, full-adders, flip-flops, RAMs, etc. This description is then used as a specification in the circuit design phase.

The use of a top-down design methodology also forces the designer to carefully define the module interfaces, i.e., use abstractions. In return, the well-defined periphery of a module and its internal function suffice to describe the module at the next higher level in the design hierarchy. This allows internal details to be hidden, so that they do not obstruct the analysis at the next higher level.

The hierarchy of abstractions can, and should, also be used to reduce the volume of design data and to provide suitable representations to speed up the operation of the computer-aided design tools. Note that it may be necessary to store, retrieve, display, and process several hundred megabytes of data if a nonhierarchical approach is used.

The top-down approach relies on the designer's experience since the partitioning must lead to realizable subsystems. From the manager's point of view, it is easy to monitor the progress of the project and check it against the time schedule.

URL: https://www.sciencedirect.com/science/article/pii/B9780127345307500015

Structured Design Methodologies

Konrad Morgan, in Encyclopedia of Information Systems, 2003

IX. Summary

As we have seen, structured design is a disciplined approach to information systems design based on a series of techniques for factoring information systems design into independent modules. In the process of functional decomposition of top-down design, the major logical components or data transformations within the system are identified and relationships between them are established.

Designs are developed using a semiformal notation, usually in terms of either the logical processes (pseudo-code or structured English) or the data flows (data flow diagrams) within the system in a top-down hierarchy of modules or processes. Design decisions are based on what the problem is and not on how it is to be solved. Structured design uses tools and formal notations, especially graphic ones, to render systems understandable.

The following are design techniques common to all structured design methodologies:

Top-down design (functional decomposition): Top-down design specifies the solution to a problem in general terms and then divides the solution into finer and finer details until no more detail is necessary.

Modularization: This is a concept to help the designer and later the programmer to break a complex design into smaller sub-designs often called modules (see also procedures and functions).

Modules: A module is a set of problem-solving actions or data transformations that can be tested and verified independently of its use in a larger design. Well-designed modules form hierarchical relationships made up of a series of modules, each with its own function (cohesion) and with a simple control path featuring a single entry and exit point (no or low coupling).

Use of the three control constructs: The pseudocode or structured English produced within structured design should limit itself to using the three control constructs of sequence, selection, and repetition.
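As a brief illustration of these techniques working together (the example problem and its module names are invented), a task can be decomposed top-down into small modules, each with a single entry and exit point and using only the three control constructs:

```python
# Illustrative sketch of functional decomposition: the general solution
# ("summarize the scores") is divided into single-purpose modules built
# from sequence, selection, and repetition only.
def total(scores):
    s = 0
    for x in scores:        # repetition
        s = s + x           # sequence
    return s                # single exit point

def count_passing(scores, threshold):
    n = 0
    for x in scores:        # repetition
        if x >= threshold:  # selection
            n = n + 1
    return n

def summarize(scores, threshold):
    """Top-level module: delegates to the lower-level modules."""
    return {"total": total(scores),
            "passing": count_passing(scores, threshold)}

print(summarize([55, 72, 90, 64], threshold=60))
# -> {'total': 281, 'passing': 3}
```

Each module here is cohesive (one function), loosely coupled (communicates only through parameters and return values), and testable on its own.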

URL: https://www.sciencedirect.com/science/article/pii/B0122272404001726

Introduction to MATLAB Programming

Stormy Attaway, in MATLAB (Fifth Edition), 2019

Abstract

The chapter introduces the idea of algorithms and scripts. Elements of programming style, such as the top-down design approach and the use of comments for documentation, are explained. Input and output statements are introduced, including the formatting of output. Simple plot functions and functions that annotate plots are introduced in this chapter, as it is easiest to produce plots using scripts. Reading from a file using load and writing to a file using save are demonstrated. The concept of a user-defined function is introduced with the type of function that calculates and returns a single value. The use of stand-alone code files for functions and the use of local functions within script files are both demonstrated.

URL: https://www.sciencedirect.com/science/article/pii/B9780128154793000039

Integrated Circuit Design

Lars Wanhammar, in DSP Integrated Circuits, 1999

12.3.4 The Unconstrained-Cell Design Approach

The unconstrained-cell design or macrocell design approach allows arbitrary shapes and placement of the building blocks. Typically, the layout of the building blocks is synthesized from high-level functional or behavioral descriptions and the designer has only limited control of their shapes. A bottom-up assembly of the building blocks therefore becomes an important design step. Significant amounts of chip area may therefore appear between the blocks. However, this area is not wasted, because it can be used to implement decoupling capacitors between the power lines. Such capacitors are necessary to reduce switching noise. Figure 12.9 illustrates a floor plan with routed wires for a chip that is designed using the unconstrained-cell approach. Only the highest level in the cell hierarchy is shown.

Figure 12.9. Floor plan of a chip designed with the unconstrained-cell approach

Of course, it is also possible to combine the unconstrained-cell design approach with a top-down design process. The first step is to estimate the shape and chip area needed for the different functions—e.g., processors, memories, and control units. These blocks and the necessary I/O modules are given a placement in the next step, which is called floor planning, and the wires between the blocks are routed. The goal of the placement and routing steps is to minimize the required chip area while satisfying the timing restrictions. This process is repeated for all levels in the design hierarchy until the remaining modules are simple cells that can easily be designed or are available in the cell library. Noncritical parts of a chip are often implemented using a standard-cell approach.

EXAMPLE 12.1

Figure 12.10 shows a layout of a 128-point FFT/IFFT processor with two RAMs and a single butterfly processing element.

Figure 12.10. A 128-point FFT/IFFT processor using the unconstrained-cell approach

An unconstrained-cell approach is used for all blocks. The processor computes a 128-point FFT or IFFT in 64 ms (including I/O). The architecture is analogous to the FFT in the case study. The processing element runs at 128 MHz, the bit-serial control unit runs at 256 MHz, and the memory runs at 32 MHz. The core area is 8.5 mm2 and the total chip area including pads is 16.7 mm2. The number of devices is 37,000. The chip was fabricated in an AMS 0.8-μm double-metal CMOS process. The power consumption is about 400 mW at 3.0 V. The power consumption is high in this case, mainly because of the use of the TSPC logic style and an overdesigned clock driver and clock network.

The aim of the unconstrained-cell design approach is to achieve high performance circuits and at the same time reduce the amount of design work by using previously designed building blocks or blocks that can be generated by a synthesis tool. Standardization is only used at the lowest layout level to reduce the design space. It takes the form of design rules that impose geometrical and topological constraints on the layout. These constraints originate from two sources:

Limitations in the manufacturing process—for example, misalignment among mask layers, minimum width, and spacing of features.

Physical limitations of circuits such as electromigration, current densities, junction breakdown, punch through, and latch-up.

Often, a Manhattan geometry is imposed on the layout. In this standardization of the layout, the wires are required to run only horizontally or vertically. The Manhattan layout style significantly simplifies the checking of the design rules. Many design rule checkers only accept Manhattan layouts.
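A sketch of why this restriction helps (illustrative only, not from the chapter): with Manhattan wires, an orientation check reduces to simple coordinate comparisons, which is exactly the kind of test a design rule checker can run cheaply over millions of shapes.

```python
# Illustrative sketch: checking wire segments against the Manhattan
# layout style, where every wire must run horizontally or vertically.
def is_manhattan(segment):
    (x1, y1), (x2, y2) = segment
    return x1 == x2 or y1 == y2   # vertical or horizontal

wires = [((0, 0), (0, 5)),        # vertical: ok
         ((0, 5), (3, 5)),        # horizontal: ok
         ((3, 5), (6, 8))]        # diagonal: violates the layout style

violations = [w for w in wires if not is_manhattan(w)]
print(violations)                 # -> [((3, 5), (6, 8))]
```

Checks for minimum width and spacing simplify in the same way, since all edges are axis-aligned.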

Characteristics for the unconstrained-cell approach are:

Generally, no restrictions are put on the shape, area, and placement of modules or the wire routing. The layout of the low-level cells is done in detail and allows all aspects of the cells to be optimized, but it is costly in terms of design time. This cost may be reduced by using automatic design tools for the cell design.

The design involves not only schematic and netlist entry, but also detailed layout, design rule checking, logic and electrical simulation, etc. As opposed to the semicustom approaches, the designer is responsible for the whole design.

Sophisticated software is required to generate the building blocks.

The potential device density and switching speed are very high. Typical device densities for a double-metal, 0.8-μm CMOS process are in the range of 6000 to 11,000 transistors per mm2. However, the device densities achieved in microprocessor chips, normalized with respect to the shrinking geometry, are continuing to decrease [12]. The reason for this decline in device density is to be found in the inefficient design methodologies and design tools that are being used.

Turnaround time and cost are the same as for standard-cell designs.

The design time may be long and less well controlled.

Changing vendor or VLSI process may be difficult, but may be somewhat simpler if automatic or semi-automatic layout tools are used.

Digital and analog circuits can be mixed. Medium- to relatively large-size memories can be implemented in a standard process for digital circuits.

URL: https://www.sciencedirect.com/science/article/pii/B978012734530750012X

Circuit Modeling with Hardware Description Languages

In Top-Down Digital VLSI Design, 2015

4.5 Conclusions

The universal adoption of VHDL and SystemVerilog is due to the many benefits they offer:

+

They support a top-down design methodology of successive refinements from behavioral simulations down to gate-level netlists using a single standard language.

+

RTL synthesis does away with all lower-level schematic drawings in a typical VLSI design hierarchy, saving significant time and effort.

+

HDLs enable sharing, reusing and porting of subfunctions and subcircuits in a parametrized and therefore more useful form than schematic diagrams.

+

Automatic technology mapping makes it unnecessary to commit an HDL-based design to some specific cell library or fabrication process until late in the design cycle, even allowing for retargeting between field- and mask-programmable ASICs.

+

VHDL and SystemVerilog also support the coding of simulation testbenches, albeit not to the same degree; see fig. 4.24. More on this is to follow in chapter 5.

Figure 4.24. The capabilities of the three predominant HDLs at a glance.

Learning to master VHDL or SystemVerilog may be daunting.

While the IEEE 1076 and IEEE 1800 standards are fully supported for simulation, only a subset is amenable to synthesis because the languages have not originally been developed with synthesis in mind. This is not a problem for informed designers, however.

Timing constraints and synthesis directives are not part of VHDL and SystemVerilog and must be captured using proprietary languages. There also is a lack of agreement between tool vendors on what constructs the synthesis subset ought to include and when to support new constructs introduced with past standard revisions.

A gap remains between system design, which focuses on overall circuit behavior and transactions on high-level data, and actual hardware design, which involves many structural and implementation-specific issues. The necessary manual translation from a purely behavioral model to RTL synthesis code and the ensuing re-entry of design data are inefficient and lead to errors and misinterpretations.

The impact of coding style on combinational random logic tends to be overstated. Also, do not expect timing-wise synthesis constraints to do away with architectural bottlenecks. All too often, their effects are limited to buying moderate performance gains at the expense of substantially larger circuits.

The most important engineering decisions that set efficient designs apart from inefficient ones do not relate to HDLs, but to architectural issues. Algorithmic and architectural questions must be answered before the first line of synthesis code is written.

HDL synthesis does not do away with architecture design!

URL: https://www.sciencedirect.com/science/article/pii/B9780128007303000046

Database Development Process

Ming Wang, Russell K. Chan, in Encyclopedia of Information Systems, 2003

I.C.2.e. Bottom-up Design and Normalization

Normalization can also be used as a bottom-up approach to the design of relational databases. It is far less popular than top-down design approaches such as ER and EER and is used only for very small and simple databases. An example of this normalization approach would be to create a database using data from a flat file on an existing system. Functional dependency analysis is used to group attributes into relations that represent types of entities. The process of normalizing Table II from 1NF to 3NF is an example. If we take Table II as a flat data file, we have successfully converted it into a small database through the process of normalization.
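A minimal sketch of the idea (Table II itself is not reproduced here, so the flat file below is hypothetical): a functional dependency such as order_id → customer lets us split a flat file into separate relations, removing the repeated attribute values.

```python
# Hypothetical illustration of bottom-up normalization: a flat file in
# which customer repeats for every row of an order is decomposed using
# the functional dependency order_id -> customer.
flat = [
    {"order_id": 1, "item": "pen", "customer": "Ada"},
    {"order_id": 1, "item": "ink", "customer": "Ada"},
    {"order_id": 2, "item": "pen", "customer": "Bob"},
]

# Relation 1: attributes fully determined by order_id
orders = {row["order_id"]: row["customer"] for row in flat}

# Relation 2: the remaining (order_id, item) pairs
order_items = [(row["order_id"], row["item"]) for row in flat]

print(orders)       # -> {1: 'Ada', 2: 'Bob'}
print(order_items)  # -> [(1, 'pen'), (1, 'ink'), (2, 'pen')]
```

Each customer name is now stored once per order rather than once per row, which is the redundancy that normalization to higher normal forms removes.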

URL: https://www.sciencedirect.com/science/article/pii/B0122272404000265

Service-Oriented Architecture

James McGovern, ... Sunil Mathew, in Java Web Services Architecture, 2003

Modular Decomposability

The modular decomposability of a service refers to the breaking of an application into many smaller modules. Each module is responsible for a single, distinct function within an application. This is sometimes referred to as "top-down design," in which the bigger problems are iteratively decomposed into smaller problems. For instance, a banking application is broken down into a savings account service, checking account service, and customer service. The main goal of decomposability is reusability. The goal for service design is to identify the smallest unit of software that can be reused in different contexts. For instance, a customer call-center application may need only the customer's telephone number and thus needs access only to the customer service to retrieve it.
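The banking decomposition described above can be sketched as follows (the class and method names are hypothetical, invented for illustration): because each service owns one distinct function, the call-center application reuses only the customer service, the smallest unit it needs.

```python
# Illustrative sketch of modular decomposability: each service is
# responsible for a single, distinct function.
class CustomerService:
    def __init__(self):
        self._phones = {"c42": "555-0100"}   # sample data
    def get_phone(self, customer_id):
        return self._phones[customer_id]

class SavingsAccountService:
    def get_balance(self, account_id):
        return 100.0                          # sample data

class CallCenterApp:
    """Reuses only the smallest unit it needs: the customer service."""
    def __init__(self, customers):
        self.customers = customers
    def lookup_phone(self, customer_id):
        return self.customers.get_phone(customer_id)

app = CallCenterApp(CustomerService())
print(app.lookup_phone("c42"))    # -> 555-0100
```

The call-center application never touches the account services, so those can change or be redeployed independently, which is the reusability benefit the text describes.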

URL: https://www.sciencedirect.com/science/article/pii/B9781558609006500051