The Basic Model of Computation (O-Level Python, Unit I)







The Basic Model of Computation





  • A model of computation is a model that describes how the output of a mathematical function is computed given an input.
  • It describes how units of computation, memory, and communication are organized.
  • It is used to measure the computational complexity of an algorithm.
  • It allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
  • Models of computation can be classified into three categories: sequential models, functional models, and concurrent models.




  • Models differ in their expressive power; for example, each function that can be computed by a finite state machine can also be computed by a Turing machine, but not vice versa.
  • A non-deterministic variant is associated with some of these models of computation. Non-deterministic models are not useful for practical computation; they are used in the study of the computational complexity of algorithms.









Sequential models include:





  • Finite state machines
  • Pushdown automata
  • Random access machines
  • Turing machines




Functional models include:





  • Lambda calculus
  • General recursive functions
  • Combinatory logic
  • Abstract rewriting systems




Concurrent models include:





  • Cellular automaton
  • Digital circuits
  • Kahn process networks
  • Petri nets
  • Synchronous Data Flow
  • Interaction nets
  • Actor model









Sequential computational model





  • A sequential computational model is one in which instructions are executed one after another.
  • There may be branches in the program, but the general principle is that each instruction follows on from the previous one.
  • Python programs are sequential.
  • A parallel computational model is one in which each program instruction is executed simultaneously on multiple processors in order to get the results faster.
  • Using multiple cores in processors is an example of parallel computing. It is by using parallel processing that supercomputers are getting faster and faster.









 





A.  Finite-state machine





  • A finite-state machine (FSM) or finite-state automaton (FSA; plural: automata), finite automaton, or simply state machine, is a mathematical model of computation.
  • A state is a description of the status of a system that is waiting to execute a transition.
  • It is an abstract machine that can be in exactly one of a finite number of states at any given time.
  • It can change from one state to another in response to some inputs; the change from one state to another is called a transition.
  • A transition is a set of actions to be executed when a condition is fulfilled or when an event is received. For example, when using an audio system to listen to the radio (the system is in the "radio" state), receiving a "next" stimulus results in moving to the next station.
  • An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition.
  • Finite-state machines are of two types: deterministic finite-state machines and non-deterministic finite-state machines.
  • A deterministic finite-state machine can be constructed equivalent to any non-deterministic one.
  • The behaviour of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are vending machines, which dispense products when the proper combination of coins is deposited; elevators, whose sequence of stops is determined by the floors requested by riders; traffic lights, which change sequence when cars are waiting; and combination locks, which require the input of a sequence of numbers in the proper order.
  • The finite-state machine has less computational power than some other models of computation, such as the Turing machine.
  • This distinction means there are computational tasks that a Turing machine can do but an FSM cannot, because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory.




In some finite-state machine representations, it is also possible to associate actions with a state (a Python sketch follows the list below):





  • an entry action: performed when entering the state, and
  • an exit action: performed when exiting the state.
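Below is a minimal, illustrative FSM sketch in Python, using the audio-system example above; the transition table, the state names, and the entry actions are all hypothetical choices, not a prescribed design.

```python
# A minimal finite-state machine: an audio system with "radio" and
# "cd" states. A table maps (state, input) to the next state, and
# each state has an entry action run when the state is entered.

TRANSITIONS = {                      # (state, input) -> next state
    ("radio", "next"):   "radio",    # "next" station: stays in radio
    ("radio", "source"): "cd",
    ("cd",    "next"):   "cd",       # "next" track: stays in cd
    ("cd",    "source"): "radio",
}

ENTRY_ACTIONS = {                    # performed when entering a state
    "radio": lambda: print("tuner on"),
    "cd":    lambda: print("cd spinning"),
}

def run(state, inputs):
    for symbol in inputs:
        nxt = TRANSITIONS[(state, symbol)]   # the transition
        if nxt != state:
            ENTRY_ACTIONS[nxt]()             # fire the entry action
        state = nxt
    return state

print(run("radio", ["next", "source", "next"]))   # -> cd
```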












 





B.   Pushdown automaton










Pushdown automata are used in theories about what can be computed by machines.





They are more capable than finite-state machines but less capable than Turing machines. 





Deterministic pushdown automata can recognize all deterministic context-free languages while non-deterministic ones can recognize all context-free languages, with the former often used in parser design.





The term "pushdown" refers to the fact that the stack can be regarded as being "pushed down" like a tray dispenser at a cafeteria, since the operations never work on elements other than the top element.





A stack automaton, in contrast, does allow access to and operations on deeper elements, and can recognize a strictly larger set of languages than pushdown automata.





A nested stack automaton allows full access, and also allows stacked values to be entire sub-stacks rather than just single finite symbols.





A pushdown automaton (PDA) differs from a finite state machine in two ways:





  1. It can use the top of the stack to decide which transition to take.
  2. It can manipulate the stack as part of performing a transition.




A pushdown automaton reads a given input string from left to right.





In each step, it chooses a transition by indexing a table by input symbol, current state, and the symbol at the top of the stack.





A pushdown automaton can also manipulate the stack, as part of performing a transition.





The manipulation can be to push a particular symbol to the top of the stack, or to pop off the top of the stack.





The automaton can alternatively ignore the stack, and leave it as it is.
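As a minimal sketch of the idea, here is a Python function behaving like a single-state PDA that accepts balanced parentheses; the function name and the accept-on-empty-stack convention are invented for illustration.

```python
# Each step looks at the input symbol and the top of the stack, then
# pushes, pops, or rejects -- a simplified, single-state PDA.

def accepts_balanced(s):
    stack = []
    for ch in s:
        if ch == "(":          # transition: push
            stack.append("(")
        elif ch == ")":
            if not stack:      # nothing to match on the stack: reject
                return False
            stack.pop()        # transition: pop
        else:
            return False       # symbol outside the input alphabet
    return not stack           # accept iff the stack is empty at the end

print(accepts_balanced("(()())"))   # True
print(accepts_balanced("(()"))      # False
```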





C.   Random-access machine










In computer science, a random-access machine (RAM) is an abstract machine in the general class of register machines.





It is very similar to the counter machine but with the added capability of 'indirect addressing' of its registers. Like the counter machine the RAM has its instructions in the finite-state portion of the machine (the so-called Harvard architecture).





The RAM's equivalent of the universal Turing machine – with its program in the registers as well as its data – is called the random-access stored-program machine or RASP.





It is an example of the so-called von Neumann architecture and is closest to the common notion of a computer.





The RAM and RASP models are used for computational complexity analysis.





The concept of a random-access machine (RAM) starts with the simplest model of all, the so-called counter machine model. Values are stored in one or more auxiliary (dedicated) registers, the most common of which is called "the accumulator".





This model is identical to a multiple-register counter machine with the addition of indirect addressing. At the discretion of an instruction from its finite-state machine's TABLE, the machine derives a "target" register's address either (i) directly from the instruction itself, or (ii) indirectly from the contents (e.g. a number or label) of the "pointer" register specified in the instruction.
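Here is a toy Python sketch of that idea; the three-instruction set (LOAD, ADD, STORE_IND) is invented purely to illustrate direct versus indirect register addressing.

```python
# A toy random-access machine: registers addressed directly, or
# indirectly via a "pointer" register whose contents name the target.

def run_ram(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":            # LOAD r, value : r <- value
            registers[args[0]] = args[1]
        elif op == "ADD":           # ADD r, s : r <- r + reg[s]
            registers[args[0]] += registers[args[1]]
        elif op == "STORE_IND":     # STORE_IND p, s : reg[reg[p]] <- reg[s]
            registers[registers[args[0]]] = registers[args[1]]
        pc += 1
    return registers

regs = run_ram(
    [("LOAD", 1, 7),        # r1 <- 7
     ("LOAD", 2, 3),        # r2 <- 3 (a pointer naming register 3)
     ("STORE_IND", 2, 1)],  # r[r2], i.e. r3 <- r1 (indirect addressing)
    {1: 0, 2: 0, 3: 0},
)
print(regs)   # {1: 7, 2: 3, 3: 7}
```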





D.   Turing machine





A Turing machine is a mathematical model of computation that defines an abstract machine which manipulates symbols on a strip of tape according to a table of rules.





Despite the model's simplicity, given any computer algorithm, a Turing machine capable of simulating that algorithm's logic can be constructed.





The machine operates on an infinite memory tape divided into discrete "cells".





The machine positions its "head" over a cell and "reads" or "scans" the symbol there.





Then, as per the symbol and the machine's own present state in a "finite table" of user-specified instructions, the machine (see the sketch after this list):





  • writes a symbol (e.g., a digit or a letter from a finite alphabet) in the cell (some models allow symbol erasure or no writing), then
  • either moves the tape one cell left or right (some models allow no motion, some models move the head), then
  • (as determined by the observed symbol and the machine's own state in the table) either proceeds to a subsequent instruction or halts the computation.
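The following minimal Python simulator illustrates those steps for a hypothetical machine that increments a binary number; the rule table, state names, and sparse-tape convention are invented for illustration.

```python
# Rules map (state, symbol) -> (symbol to write, head move, next state).
# The head starts on the rightmost digit; "_" is the blank symbol.

RULES = {
    ("inc", "0"): ("1", 0, "halt"),     # 0 -> 1, done
    ("inc", "1"): ("0", -1, "inc"),     # carry: 1 -> 0, move left
    ("inc", "_"): ("1", 0, "halt"),     # ran off the left: new digit
}

def run_tm(tape, state="inc"):
    tape = dict(enumerate(tape))        # sparse tape: cell index -> symbol
    head = max(tape)
    while state != "halt":
        symbol = tape.get(head, "_")    # read the scanned cell
        write, move, state = RULES[(state, symbol)]
        tape[head] = write              # write, then move the head
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run_tm("1011"))   # 1100  (11 + 1 = 12)
```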




The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine).










Function model










In systems engineering, software engineering, and computer science, a function model or functional model is a structured representation of the functions (activities, actions, processes, operations) within the modelled system or subject area.





A function model, similar to the activity model or process model, is a graphical representation of an enterprise's function within a defined scope.





The purposes of the function model are to describe the functions and processes, assist with the discovery of information needs, help identify opportunities, and establish a basis for determining product and service costs.





This category includes various models:





  • Lambda calculus
  • General recursive functions
  • Combinatory logic
  • Abstract rewriting systems




Lambda calculus 





Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application, using variable binding and substitution.





It is a universal model of computation that can be used to simulate any Turing machine. It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics.





Lambda calculus consists of constructing lambda terms and performing reduction operations on them. In the simplest form of lambda calculus, terms are built using only the following rules:










Syntax | Name | Description
x | Variable | A character or string representing a parameter or mathematical/logical value.
(λx.M) | Abstraction | A function definition (M is a lambda term). The variable x becomes bound in the expression.
(M N) | Application | Applying a function to an argument. M and N are lambda terms.









These rules produce expressions such as: (λx.λy.(λz.(λx.z x) (λy.z y)) (x y)). Parentheses can be dropped if the expression is unambiguous. For some applications, terms for logical and mathematical constants and operations may be included.
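Python's lambda gives an informal way to see the three term forms in action: a variable is a name, an abstraction (λx.M) is `lambda x: M`, and an application (M N) is `M(N)`. The Church-boolean encodings below are standard; the helper as_bool is an invented convenience for printing.

```python
TRUE  = lambda x: lambda y: x        # λx.λy.x
FALSE = lambda x: lambda y: y        # λx.λy.y
AND   = lambda p: lambda q: p(q)(p)  # λp.λq.p q p

# Applying the terms evaluates them, mirroring beta-reduction:
as_bool = lambda b: b(True)(False)   # decode a Church boolean
print(as_bool(AND(TRUE)(FALSE)))     # False
print(as_bool(AND(TRUE)(TRUE)))      # True
```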










General recursive function





In mathematical logic and computer science, a general recursive function (often shortened to recursive function) or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense.





In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis).





The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions.





Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms.





The μ-recursive functions (or general recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number.





They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the μ operator.





The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimization) is the class of primitive recursive functions.





While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined.





The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive but not primitive recursive.
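A small Python sketch of the μ (minimisation) operator makes the "partial" point concrete; the function mu and the integer-square-root example are invented for illustration.

```python
# μy.[f(x, y) = 0] returns the least y with f(x, y) == 0, and
# diverges if no such y exists -- which is exactly why μ-recursive
# functions are partial functions.

def mu(f, *x):
    y = 0
    while f(*x, y) != 0:   # may loop forever: a partial function
        y += 1
    return y

# Example: integer square root as μy.[(y+1)² > x], encoded with a
# zero test: f(x, y) = 0 iff (y+1)² > x.
isqrt = lambda x: mu(lambda x, y: 0 if (y + 1) ** 2 > x else 1, x)
print(isqrt(17))   # 4

# By contrast, mu(lambda y: 1) would never return: minimising a
# function that is never zero is undefined, as noted above.
```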










Combinatory logic





Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry, and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions, and to remove any mention of variables, particularly in predicate logic.





A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments.
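The two basic combinators S and K are easy to write down in Python; deriving the identity combinator I from them is the classic first example. This is a sketch of the standard definitions, not any particular library's API.

```python
K = lambda x: lambda y: x                      # K x y = x
S = lambda f: lambda g: lambda x: f(x)(g(x))   # S f g x = f x (g x)

# The identity combinator, defined without variables: I = S K K,
# since S K K x = K x (K x) = x.
I = S(K)(K)
print(I(42))          # 42
print(K("a")("b"))    # a
```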










Abstract rewriting system





In mathematical logic and theoretical computer science, an abstract rewriting system (also (abstract) reduction system or abstract rewrite system; abbreviated ARS) is a formalism that captures the quintessential notion and properties of rewriting systems.





In its simplest form, an ARS is simply a set A (of "objects") together with a binary relation, traditionally denoted →; this definition can be further refined if we index (label) subsets of the binary relation.





An ARS is sufficient to describe important properties of rewriting systems like normal forms, termination, and various notions of confluence.





An abstract reduction system (ARS) is the most general (unidimensional) notion about specifying a set of objects and rules that can be applied to transform them. More recently, authors use the term abstract rewriting system as well. 





An ARS is a set A, whose elements are usually called objects, together with a binary relation on A, traditionally denoted by →, and called the reduction relation, rewrite relation, or just reduction.





This terminology using "reduction" is a little misleading, because the relation is not necessarily reducing some measure of the objects.
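As a toy illustration in Python, take strings over {a, b} as the objects and a single rewrite rule ba → ab as the relation; repeatedly applying it reaches a normal form (all a's before all b's). The rule set and the helper function are invented for illustration.

```python
RULES = [("ba", "ab")]   # one rule: move an 'a' one step left

def normal_form(s):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)   # one reduction step
                changed = True
    return s   # no rule applies any more: a normal form

print(normal_form("babba"))   # aabbb
```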










Concurrent models 





In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome.





This allows for parallel execution of the concurrent units, which can significantly improve overall speed of the execution in multi-processor and multi-core systems.





In more technical terms, concurrency refers to the decomposability property of a program, algorithm, or problem into order-independent or partially-ordered components or units.










Concurrent models include:





  • Cellular automaton
  • Digital circuits
  • Kahn process networks
  • Petri nets
  • Synchronous Data Flow
  • Interaction nets
  • Actor model




Cellular automaton





A cellular automaton (abbreviated CA) is a discrete model of computation studied in automata theory.





Cellular automata are also called cellular spaces, tessellation automata, homogeneous structures, cellular structures, tessellation structures, and iterative arrays.





Cellular automata have found application in various areas, including physics, theoretical biology and microstructure modelling.





The word automaton comes from a Greek word meaning "self-making". An automaton (plural: automata) is an abstract, self-propelled computing device which follows a predetermined sequence of operations automatically.





Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science.





A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off.





The grid can be in any finite number of dimensions. For each cell, a set of cells called its neighbourhood is defined relative to the specified cell.





An initial state (time t = 0) is selected by assigning a state for each cell. A new generation is created (advancing t by 1), according to some fixed rule (generally, a mathematical function) that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighbourhood. Typically, the rule for updating the state of cells is the same for each cell and does not change over time, and is applied to the whole grid simultaneously, though exceptions are known, such as the stochastic cellular automaton and asynchronous cellular automaton.
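A short one-dimensional example in Python, using Wolfram's well-known rule 30 on a wrap-around grid; the grid size and display characters are arbitrary choices for illustration.

```python
# Each cell's next state depends on itself and its two neighbours;
# all cells are updated simultaneously each generation.

RULE = 30   # the bits of 30 encode the new state for each 3-cell pattern

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)           # % n: a circular (wrap-around) grid
    ]

cells = [0] * 15 + [1] + [0] * 15   # t = 0: one live cell in the middle
for _ in range(8):                  # print 8 generations
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```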





The concept was originally discovered in the 1940s by Stanislaw Ulam and John von Neumann at Los Alamos National Laboratory.





Cellular automata can simulate a variety of real-world systems, including biological and chemical ones.










Digital circuits (Logic gate)





A logic gate is an idealized model of computation or physical electronic device implementing a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output.





Logic gates are primarily implemented using diodes or transistors acting as electronic switches, but can also be constructed using vacuum tubes, electromagnetic relays (relay logic), fluidic logic, pneumatic logic, optics, molecules, or even mechanical elements.





Logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore of all the algorithms and mathematics that can be described with Boolean logic.





Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors, which may contain more than 100 million gates. In modern practice, most gates are made from MOSFETs (metal–oxide–semiconductor field-effect transistors).
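Gate composition can be sketched in a few lines of Python: a half adder built from XOR and AND shows how even a tiny gate network already performs arithmetic. The gate and function names are our own.

```python
AND = lambda a, b: a & b   # carry logic
XOR = lambda a, b: a ^ b   # sum logic

def half_adder(a, b):
    return XOR(a, b), AND(a, b)     # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
```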





Kahn process networks









Kahn process networks (KPNs, or process networks) are a distributed model of computation in which a group of deterministic sequential processes communicate through unbounded FIFO channels.





The resulting process network displays deterministic behaviour that does not depend on the various computation or communication delays.





The model was originally developed for modelling distributed systems but has proven its convenience for modelling signal processing systems.





KPNs have found many applications in modelling embedded systems, high-performance computing systems, and other computational tasks.





KPNs were first introduced by Gilles Kahn.
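A minimal Python sketch of the idea, assuming two processes and one channel built from the standard threading and queue modules: an unbounded queue.Queue plays the FIFO channel (writes never block, reads block until a token arrives), so the printed output is the same whatever the scheduling delays. The None end-of-stream marker is our own convention.

```python
import threading, queue

fifo = queue.Queue()          # the unbounded FIFO channel

def producer():               # a deterministic sequential process
    for n in range(5):
        fifo.put(n * n)       # write a token to the channel
    fifo.put(None)            # end-of-stream marker (our convention)

def consumer():
    while (tok := fifo.get()) is not None:   # blocking read
        print("received", tok)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```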










Petri net





A Petri net, also known as a place/transition (PT) net, is one of several mathematical modelling languages for the description of distributed systems. It is a class of discrete event dynamic system.





A Petri net is a directed bipartite graph that has two types of elements, places and transitions, depicted as white circles and rectangles, respectively. A place can contain any number of tokens, depicted as black circles. A transition is enabled if all places connected to it as inputs contain at least one token.





Some sources state that Petri nets were invented in August 1939 by Carl Adam Petri, at the age of 13, for the purpose of describing chemical processes.





Like industry standards such as UML activity diagrams, Business Process Model and Notation and event-driven process chains, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution.





Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis.





A Petri net consists of places, transitions, and arcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the input places of the transition; the places to which arcs run from a transition are called the output places of the transition.





Graphically, places in a Petri net may contain a discrete number of marks called tokens.





Any distribution of tokens over the places will represent a configuration of the net called a marking.





In an abstract sense relating to a Petri net diagram, a transition of a Petri net may fire if it is enabled, i.e. there are sufficient tokens in all of its input places; when the transition fires, it consumes the required input tokens, and creates tokens in its output places.





A firing is atomic, i.e. a single non-interruptible step.
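The enable/fire cycle is easy to mimic in Python; the places, transitions, and initial marking below model a hypothetical single-server queue and are invented for illustration.

```python
marking = {"wait": 1, "free": 1, "busy": 0}   # tokens per place

TRANSITIONS = {
    "start": {"in": ["wait", "free"], "out": ["busy"]},
    "done":  {"in": ["busy"], "out": ["free"]},
}

def enabled(t):   # every input place must hold at least one token
    return all(marking[p] >= 1 for p in TRANSITIONS[t]["in"])

def fire(t):      # consume input tokens, produce output tokens
    assert enabled(t), f"{t} is not enabled"
    for p in TRANSITIONS[t]["in"]:
        marking[p] -= 1
    for p in TRANSITIONS[t]["out"]:
        marking[p] += 1

fire("start")
print(marking)   # {'wait': 0, 'free': 0, 'busy': 1}
fire("done")
print(marking)   # {'wait': 0, 'free': 1, 'busy': 0}
```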










Synchronous Data Flow





Synchronous Dataflow is a restriction of Kahn process networks, where nodes produce and consume a fixed number of data items per firing. This allows static scheduling.





It is used in computing and communications to optimise data transfer and traffic management.





It is different from the asynchronous flow of data, which follows a different structure.
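A sketch of the static-scheduling idea in Python, with invented token rates: node A produces 2 tokens per firing and node B consumes 3, so the repeating schedule A A A B B balances the buffer and can be fixed before execution.

```python
buf = []                       # the FIFO between nodes A and B
counter = 0

def fire_A():
    global counter
    buf.extend([counter, counter + 1])        # produce 2 tokens
    counter += 2

def fire_B():
    tokens = [buf.pop(0) for _ in range(3)]   # consume 3 tokens
    print("B got", tokens)

# 3 firings of A supply exactly 2 firings of B (6 tokens each way),
# so this periodic schedule is computed once, before execution.
SCHEDULE = [fire_A, fire_A, fire_A, fire_B, fire_B]
for node in SCHEDULE * 2:      # run the static schedule twice
    node()
```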










Interaction nets





Interaction nets are a graphical model of computation devised by Yves Lafont in 1990 as a generalisation of the proof structures of linear logic.





Interaction nets are graph-like structures consisting of agents and edges.





An interaction net system is specified by a set of agent types and a set of interaction rules.





Interaction nets are an inherently distributed model of computation in the sense that computations can take place simultaneously in many parts of an interaction net, and no synchronisation is needed.





Interaction nets are at the heart of many implementations of the lambda calculus, such as efficient closed reduction and Lambdascope, which is optimal in Lévy's sense.










Actor model





The actor model in computer science is a mathematical model of concurrent computation that treats the actor as the universal primitive of concurrent computation.





In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (removing the need for lock-based synchronization).





The actor model originated in 1973. It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi.





The actor model adopts the philosophy that everything is an actor. This is similar to the "everything is an object" philosophy used by some object-oriented programming languages.





An actor is a computational entity that, in response to a message it receives, can concurrently:





  • send a finite number of messages to other actors;
  • create a finite number of new actors;
  • designate the behaviour to be used for the next message it receives.




There is no assumed sequence to the above actions and they could be carried out in parallel.
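A toy actor can be built in Python from a thread plus a mailbox queue; the Counter class and its message protocol are invented for illustration. Its private state is touched only by the actor's own thread, so no locks are needed.

```python
import threading, queue

class Counter:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0                     # private state
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):                   # the only way to interact
        self.mailbox.put(msg)

    def _run(self):                        # process one message at a time
        while True:
            msg, reply_to = self.mailbox.get()
            if msg == "inc":
                self.count += 1
            elif msg == "get":
                reply_to.put(self.count)   # answer via the caller's channel

actor = Counter()
for _ in range(3):
    actor.send(("inc", None))
answer = queue.Queue()
actor.send(("get", answer))
print(answer.get())   # 3
```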





Decoupling the sender from the communications it sends was a fundamental advance of the actor model, enabling asynchronous communication and control structures as patterns of passing messages.





Recipients of messages are identified by address, sometimes called a "mailing address". Thus an actor can only communicate with actors whose addresses it has; it can obtain an address from a message it receives, or it knows the address because it created the actor itself.





The actor model is characterized by inherent concurrency of computation within and among actors, dynamic creation of actors, inclusion of actor addresses in messages, and interaction only through direct asynchronous message passing with no restriction on message arrival order.





================================================================





ALGORITHMS





The word algorithm means "a process or set of rules to be followed in calculations or other problem-solving operations".





It is a set of rules/instructions that defines, step by step, the operations to be executed in a certain order to get the desired output.





Algorithms are generally created independent of underlying languages, i.e. an algorithm can be implemented in more than one programming language.





Characteristics of an Algorithm





An algorithm should have the following characteristics −





  • Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
  • Input − An algorithm should have 0 or more well-defined inputs.
  • Output − An algorithm should have 1 or more well-defined outputs, which should match the desired output.
  • Finiteness − An algorithm must terminate after a finite number of steps, i.e. it should not end up in an infinite loop or similar.
  • Feasible − An algorithm must be simple, generic, and practical, such that it can be executed with the available resources. It must not rely on some future technology.
  • Language Independent − An algorithm must be language-independent, i.e. it must be just plain instructions that can be implemented in any language, and yet the output will be the same, as expected.




Advantages of Algorithms:





  • It is easy to understand.
  • Algorithm is a step-wise representation of a solution to a given problem.
  • In Algorithm the problem is broken down into smaller pieces or steps hence, it is easier for the programmer to convert it into an actual program.




 





Disadvantages of Algorithms:





  • Writing an algorithm takes a long time so it is time-consuming.
  • Branching and Looping statements are difficult to show in Algorithms.




 





Design an Algorithm





There are no well-defined standards for writing algorithms; it is problem- and resource-dependent. Algorithms are never written to support a particular programming code.





As we know that all programming languages share basic code constructs like loops (do, for, while), flow-control (if-else), etc.





These common constructs can be used to write an algorithm.





We usually write algorithms in a step-by-step manner, but that is not always the case.





Algorithm writing is a process and is executed after the problem domain is well-defined. That is, we should know the problem domain, for which we are designing a solution.





Design an algorithm to add two numbers and display the result.





Step 1 − START





Step 2 − declare three integers a, b & c





Step 3 − define values of a & b





Step 4 − add values of a & b





Step 5 − store output of step 4 to c





Step 6 − print c





Step 7 − STOP





Algorithms tell the programmers how to code the program. Alternatively, the algorithm can be written as −





Step 1 − START ADD





Step 2 − get values of a & b





Step 3 − c ← a + b





Step 4 − display c





Step 5 − STOP





Algorithm to add 3 numbers and print their sum (a direct Python translation follows the steps):





  1. START
  2. Declare 3 integer variables num1, num2 and num3.
  3. Take the three numbers, to be added, as inputs in variables num1, num2, and num3 respectively.
  4. Declare an integer variable sum to store the resultant sum of the 3 numbers.
  5. Add the 3 numbers and store the result in the variable sum.
  6. Print the value of variable sum
  7. END
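Since this unit targets Python, here is the same algorithm written as a short program; each statement corresponds to one numbered step, with total standing in for sum to avoid shadowing Python's built-in.

```python
# Steps 2-3: declare the variables and take the three numbers as input.
num1 = int(input("First number: "))
num2 = int(input("Second number: "))
num3 = int(input("Third number: "))

# Steps 4-5: add the numbers and store the result.
total = num1 + num2 + num3

# Step 6: print the sum.
print("Sum =", total)
```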




================================================





FLOWCHART





A flowchart is simply a graphical representation of steps: a diagram that depicts a process, system, or computer algorithm.





It shows steps in sequential order and is widely used in presenting the flow of algorithms, workflow or processes.





They are widely used in multiple fields to document, study, plan, improve and communicate often complex processes in clear, easy-to-understand diagrams. 





Flowcharts are sometimes called by more specialized names such as Process Flowchart, Process Map, Functional Flowchart, Business Process Mapping, Business Process Modeling and Notation (BPMN), or Process Flow Diagram (PFD). They are related to other popular diagrams, such as Data Flow Diagrams (DFDs) and Unified Modeling Language (UML) Activity Diagrams.





 





Plan and draw a basic flowchart





  1. Define your purpose and scope. Make sure you are studying the right things, with appropriate start and end points, to accomplish that purpose. Be detailed enough to communicate with your intended audience, but keep the chart simple.
  2. Identify the tasks in chronological order. This might involve talking to participants, observing a process and/or reviewing any existing documentation. You might write out the steps in note form, or begin a rough chart.
  3. Organize them by type and corresponding shape, such as process, decision, data, inputs or outputs.
  4. Draw your chart, either sketching by hand or using a program such as Lucidchart.
  5. Confirm your flowchart, walking through the steps with people who participate in the process. Observe the process to make sure you haven’t missed anything important to your purpose.




 





Types of flowcharts





Different authors describe various types of flowcharts in different terms, but the most popular flowchart types are framed around the concept of flow controls rather than the flow itself:





  • Document Flowcharts: These show existing controls over document flow through the components of a system. The chart is read from left to right and documents the flow of documents through the various business units.
  • Data Flowcharts: These show the controls governing data flows in a system, rather than how controls flow.
  • System Flowcharts: These show the flow of data to and through the major components of a system, such as data entry, programs, storage media, processors, and communication networks.
  • Program Flowcharts: These show the controls placed internally to a program within a system.
  • System Flowchart: Identifies the devices to be used.
  • General Flowchart: Overview.
  • Detailed Flowchart: Increased detail.









Building blocks of flowchart






ANSI/ISO shape name | Description
Flowline (arrowhead) | Shows the process's order of operation: a line coming from one symbol and pointing at another. Arrowheads are added if the flow is not the standard top-to-bottom, left-to-right.
Terminal (start/end) | Indicates the beginning and ending of a program or sub-process. Represented as a stadium, oval, or rounded (fillet) rectangle. Usually contains the word "Start" or "End", or another phrase signalling the start or end of a process, such as "submit inquiry" or "receive product".
Process | Represents a set of operations that changes the value, form, or location of data. Represented as a rectangle.
Decision | Shows a conditional operation that determines which one of two paths the program will take; commonly a yes/no question or true/false test. Represented as a diamond (rhombus).
Input/Output (data) | Indicates the process of inputting or outputting data, as in entering data or displaying results. Represented as a rhomboid.
On-page connector | Pairs of labelled connectors replace long or confusing lines on a flowchart page. Represented by a small circle with a letter inside.
Off-page connector | A labelled connector for use when the target is on another page. Represented as a home-plate-shaped pentagon.










Advantages of Flowchart





  1. Convenient method of communication.
  2. Indicates very clearly just what is being done, where a program has logical complexities.
  3. A key to correct programming.
  4. An important tool for planning and designing a new system.
  5. Clearly indicates the role played at each level.
  6. Provides convenient documentation for a system.
  7. Provides an overview of the system and also demonstrates the relationship between various steps.
  8. Facilitates troubleshooting.
  9. Promotes logical accuracy.
  10. Makes sure that no logical path is left incomplete without any action being taken.




 





Disadvantages of Flowchart





  1. A waste of time and slows down the process of software development.
  2. Quite costly to produce and difficult to use and manage.
  3. Not very useful for computer communication.
  4. Complex program logic is complicated to draw out using the defined shapes; in such cases the flowchart becomes complex and clumsy, a pain for the user, and a waste of the time and money spent trying to correct the problem.
  5. If you need to modify or alter the process, it is very hard to do in the flowchart, because you may have to erase and redraw large parts of it, from the affected step to the end or even from the start.




================================================================





Programming language





A programming language is a formal language comprising a set of instructions that produce various kinds of output.





It is used in computer programming to implement algorithms.





There are programmable machines that use a set of specific instructions rather than general programming languages, such as Jacquard looms, music boxes, and player pianos. The programs for these machines (such as a player piano's scrolls) did not produce different behavior in response to different inputs or conditions.





Thousands of different programming languages have been created, and more are being created every year. Many programming languages are written in an imperative form (i.e., as a sequence of operations to perform) while other languages use the declarative form (i.e. the desired result is specified, not how to achieve it).





The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning).





Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard) while other languages (such as Perl) have a dominant implementation that is treated as a reference. Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common.





A programming language is a notation for writing programs, which are specifications of a computation or algorithm. 










Aspects of a programming language include:





Function and target





A computer programming language is a language used to write computer programs, which involve a computer performing some kind of computation or algorithm and possibly controlling external devices such as printers, disk drives, robots, and so on.





For example, PostScript programs are frequently created by another program to control a computer printer or display.





Generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. 





In practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. 





Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.





Abstractions





Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution.





This is expressed by the abstraction principle, which is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.





    Expressive power





ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete, yet are often called programming languages.





Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages.





===========================================================





COMPILATION





Compilation is the process the computer takes to convert a high-level programming language into a machine language that the computer can understand. The software which performs this conversion is called a compiler.





Compilation is the process of converting the source code into object code. It is done with the help of the compiler.





The compiler checks the source code for syntactical or structural errors, and if the source code is error-free, it generates the object code.














The compilation process converts the source code taken as input into the object code or machine code.





The following are the phases through which our program passes before being transformed into an executable form:





  • Preprocessor
  • Compiler
  • Assembler
  • Linker













Preprocessor





The source code is the code written in a text editor, and the source code file is given an extension, e.g. ".c" for a C program.





This source code is first passed to the preprocessor, and then the preprocessor expands this code. After expanding the code, the expanded code is passed to the compiler.





Compiler





The code which is expanded by the preprocessor is passed to the compiler. The compiler converts this code into assembly code.





Assembler





The assembly code is converted into object code by an assembler. The name of the object file generated by the assembler is the same as that of the source file. The extension of the object file in DOS is '.obj', and in UNIX it is '.o'. If the name of the source file is 'hello.c', then the name of the object file would be 'hello.obj'.










Linker





Mainly, all the programs written in C use library functions. These library functions are pre-compiled, and the object code of these library files is stored with '.lib' (or '.a') extension.





The main working of the linker is to combine the object code of library files with the object code of our program.





That is, the job of the linker is to link the object code of our program with the object code of the library files and other files.





The output of the linker is the executable file. The name of the executable file is the same as the source file but differs only in their extensions. In DOS, the extension of the executable file is '.exe', and in UNIX, the executable file can be named as 'a.out'.





 





Advantages of compiled languages





Programs that are compiled into native machine code tend to be faster than interpreted code.





This is because the process of translating code at run time adds to the overhead, and can cause the program to be slower overall.





 





Disadvantages of compiled languages





The most notable disadvantages are:





  • Additional time needed to complete the entire compilation step before testing
  • Platform dependence of the generated binary code




 





Advantages of interpreted languages





Interpreted languages tend to be more flexible, and often offer features like dynamic typing and smaller program size. Also, because interpreters execute the source program code themselves, the code itself is platform independent.





Disadvantages of interpreted languages





The most notable disadvantage is typical execution speed compared to compiled languages.





============================================================





Testing and Debugging





Testing and debugging processes in software development are used to improve the quality of the software product and make it error and fault free.





The testing and debugging processes are differentiated by the fact that testing finds software defects without correcting them.





Debugging is a more profound process, in which the bugs are not only identified but isolated and fixed in the code.





While performing testing, we can use any of its types, such as unit, integration, and system-level methods, in order to detect faults. Debugging, in contrast, examines the code's behaviour in order to locate and fix the cause of those faults.





 





Comparison Chart





Below is the difference between testing and debugging:










Testing | Debugging
The process to find bugs and errors. | The process to correct the bugs found during testing.
Identifies the failure of implemented code. | Gives absolution to the code failure.
A display of errors. | A deductive process.
Done by the tester. | Done by either the programmer or the developer.
No need of design knowledge in the testing process. | Cannot be done without proper design knowledge.
Done by insiders as well as outsiders. | Done only by insiders; an outsider cannot debug.
Can be manual or automated. | Always manual; debugging cannot be automated.
Based on different testing levels, i.e. unit testing, integration testing, system testing, etc. | Based on different types of bugs.
A stage of the software development life cycle (SDLC). | Not an aspect of the SDLC; it occurs as a consequence of testing.
Composed of the validation and verification of software. | Seeks to match symptom with cause, leading to error correction.
Initiated after the code is written. | Works with the execution of a test case.









Definition of Testing





Testing in software engineering refers to testing the program code; it comes after the coding phase and before the deployment phase in the software development life cycle.





The aim of the software project is to reduce and prevent defects; the testing process alone isn’t enough for the quality of the software.





The testing is performed for discovering the defects in the systems.





The defects can occur in any of the phases of software development, and they must be identified as close to their time of insertion as possible rather than waiting until program testing. If every phase is tested in isolation as and when the phase is accomplished, the defects can be detected early, hence reducing the overall cost.





Well-timed testing raises the probability that a product or service meets the customer's requirements.





Testing can also be explained by the below-given equation.





Software Testing = Software Verification + Software Validation





 





Types of testing





  • Positive testing: The major work of positive testing is to confirm that the developed product works or behaves as it is intended to.
  • Negative testing: It ensures the reliability and non-failure of the product even if unexpected inputs are supplied to it. (Both kinds are sketched below.)
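A minimal sketch of the two kinds of test using Python's standard unittest module; the add() function and its TypeError contract are invented for illustration.

```python
import unittest

def add(a, b):
    # Hypothetical function under test: rejects non-numeric input.
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("add() expects numbers")
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):               # intended behaviour
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):               # unexpected input is handled
        with self.assertRaises(TypeError):
            add("2", 3)

if __name__ == "__main__":
    unittest.main()
```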




 





Need of testing





  1. Technical Case– It is hard to predict the implications of the requirement and the behaviour of the system from its components. The bugs found in the languages, user interfaces, operating systems and databases can result in application failure.
  2. Business Case– If the identification of bugs is not done in the development phase, it will end up creating problems at the customers' end. Software with bugs lowers reputation, operations, and sales.
  3. Professional Case– Designing a test case is a hard and challenging task.
  4. For Verification and Validation– Testing serves as the major tools and metrics for verification and validation.
  5. For Reliability Estimation– Software reliability estimation can be done through testing where the testing behaves as a statistical sampling method to obtain the rate of failure.




 





Definition of Debugging





Testing and debugging works in a cyclic manner where testing finds the error and debugging eliminates it, as mentioned above.





Debugging is not testing, but it is carried out as a consequence of testing. It starts with the execution of the test case. When we debug a program, the two given possibilities can arise, first where the cause of the error will be identified, corrected and removed. In the second case, the cause will not be found and rectified.





Debugging focuses on





  • Before performing the debugging, it has to be assured that the individuals involved in the debugging must understand all of the causes of the errors.
  • Experimentation should not be allowed during debugging, as it could result in the addition of new errors.
  • If one error is detected in one portion of a program, it is highly possible that the program contains more errors; so it needs to be thoroughly checked.
  • The modified code in the program is required to be correct and accurate. Regression testing is performed for fulfilling the given purpose.




Debugging Steps





  1. Identify the errors.
  2. Design and plot the error report.
  3. Analyze the errors.
  4. Debugging tools are utilized.
  5. Fix the errors.
  6. Retest the software.




 





Differences Between Testing and Debugging





  1. Testing involves the execution of the program with the purpose of finding faults. On the other hand, debugging is the process of locating and correcting errors.
  2. Debugging is not a part of the SDLC cycle, in fact, it occurs as a consequence of testing. In contrast, testing is included as a phase in SDLC (Software Development Life Cycle).
  3. Debugging initiates with the execution of a test case, whereas testing is initiated just after the code is written.
  4. Testing contains two or more activities – validation and verification of the software. Inversely, debugging tries to match symptom with cause, thereby leading to error correction.




===============================================================





Documentation





Documentation is any communicable material that is used to describe, explain or instruct regarding some attributes of an object, system or procedure, such as its parts, assembly, installation, maintenance and use. 





Documentation can be provided on paper, online, or on digital or analog media, such as audio tape or CDs.





Documentation as a set of instructional materials shouldn't be confused with documentation science, the study of the recording and retrieval of information.





Any written text, illustrations, or video that describes software or a program to its users is called a program or software document.





User can be anyone from a programmer, system analyst and administrator to end user.





At various stages of development multiple documents may be created for different users.





Documentation in software engineering is all written documents and materials dealing with a software product’s development and use.





All software development products, whether created by a small team or a large corporation, require some related documentation. And different types of documents are created through the whole software development lifecycle (SDLC).





Documentation exists to explain product functionality, unify project-related information, and allow for discussing all significant questions arising between stakeholders and developers.





In modular programming documentation becomes even more important because different modules of the software are developed by different teams. If anyone other than the development team wants to or needs to understand a module, good and detailed documentation will make the task easier.





These are some guidelines for creating the documents −





  • Documentation should be from the point of view of the reader
  • Document should be unambiguous
  • There should be no repetition
  • Industry standards should be used
  • Documents should always be updated
  • Any outdated document should be phased out after due recording of the phase out




Advantages of Documentation





These are some of the advantages of providing program documentation −





  • Keeps track of all parts of a software or program
  • Maintenance is easier
  • Programmers other than the developer can understand all aspects of software
  • Improves overall quality of the software
  • Assists in user training
  • Ensures knowledge de-centralization, cutting costs and effort if people leave the system abruptly




Types of documentation





The main goal of effective documentation is to ensure that developers and stakeholders are headed in the same direction to accomplish the objectives of the project.









All software documentation can be divided into two main categories:





  • Product documentation
  • Process documentation




Product documentation describes the product that is being developed and provides instructions on how to perform various tasks with it. Product documentation can be broken down into:





  • System documentation and
  • User documentation




System documentation represents documents that describe the system itself and its parts. It includes requirements documents, design decisions, architecture descriptions, program source code, and help guides.





User documentation covers manuals that are mainly prepared for end-users of the product and system administrators. User documentation includes tutorials, user guides, troubleshooting manuals, installation, and reference manuals.





Process documentation represents all documents produced during development and maintenance that describe… well, process. The common examples of process documentation are project plans, test schedules, reports, standards, meeting notes, or even business correspondence.





The main difference between process and product documentation is that the first records the process of development, while the second describes the product that is being developed.

