Notes


  • An object may have several interfaces, each of which is a viewpoint on the methods that it provides (see the first sketch after these notes).
  • Software process: the related activities that lead to the production of the software
    • specification: functionality and constraints are defined
    • development: the software is designed and programmed
    • validation: checking to ensure the software does what the customer wants
    • evolution: the software evolves to meet changing customer needs
  • plan-driven process: activities are planned in advance, and progress is measured against this plan.
  • agile process: planning is incremental
    • requirements are developed incrementally according to user priorities
    • the team moves towards a solution in a series of steps, backtracking when a mistake is realized
  • A component is a collection of objects that operate together to provide related functions and services.
  • A pattern is a way of reusing the knowledge and experience of others (see the Observer sketch below)
    • it is a description of a problem and the essence of its solution
    • a well-tried solution to a common problem
  • Frameworks are often implementations of design patterns (see the callback sketch below)
    • inversion of control: framework objects, not application-specific objects, are responsible for control in the system
    • framework objects invoke a "callback method" that is linked to user-provided functionality
    • frameworks rely on inheritance and polymorphism
  • COTS ("commercial off-the-shelf") is a software system that can be adapted to the needs of different customers without changing its source code, just through extensive configuration.
Source: Software Engineering by Ian Sommerville.
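
To make the first note concrete, here is a minimal sketch in Java (all names are hypothetical, not taken from Sommerville): one PrintService object exposes two interfaces, so each client sees only the viewpoint on its methods that it needs.

    // One object, two viewpoints: a client holding a Printer sees only
    // print(), while a client holding a Maintainable sees only selfTest().
    interface Printer {
        void print(String document);
    }

    interface Maintainable {
        void selfTest();
    }

    class PrintService implements Printer, Maintainable {
        @Override
        public void print(String document) {
            System.out.println("printing: " + document);
        }

        @Override
        public void selfTest() {
            System.out.println("self test passed");
        }
    }

    public class InterfaceDemo {
        public static void main(String[] args) {
            PrintService service = new PrintService();
            Printer userView = service;        // end-user viewpoint
            Maintainable adminView = service;  // maintenance viewpoint
            userView.print("report.pdf");
            adminView.selfTest();
        }
    }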
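As an example of "a well-tried solution to a common problem", here is a minimal sketch of the Observer pattern (hypothetical names again): a subject notifies every registered observer when its state changes, so the dependents need no polling logic of their own.

    import java.util.ArrayList;
    import java.util.List;

    // Observer pattern: keeps dependent objects informed of state changes.
    interface Observer {
        void update(int newValue);
    }

    class Subject {
        private final List<Observer> observers = new ArrayList<>();
        private int value;

        void register(Observer o) {
            observers.add(o);
        }

        void setValue(int v) {
            value = v;
            // Notify every registered observer of the change.
            for (Observer o : observers) {
                o.update(value);
            }
        }
    }

    public class PatternDemo {
        public static void main(String[] args) {
            Subject subject = new Subject();
            subject.register(v -> System.out.println("observer saw " + v));
            subject.setValue(42);
        }
    }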
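The inversion-of-control note can be sketched the same way (hypothetical names, not a real framework's API): the framework object owns the control loop and calls back into user-provided functionality, rather than the application calling the framework.

    // Inversion of control: the framework, not the application, decides
    // when the user-provided callback runs.
    interface EventHandler {
        void onEvent(String event);   // the "callback method"
    }

    class MiniFramework {
        private final EventHandler handler;

        MiniFramework(EventHandler handler) {
            this.handler = handler;
        }

        // The framework owns the control loop and calls back into
        // application code for each event.
        void run(String[] events) {
            for (String event : events) {
                handler.onEvent(event);
            }
        }
    }

    public class IocDemo {
        public static void main(String[] args) {
            // The application only supplies behavior; the framework is in control.
            MiniFramework framework = new MiniFramework(
                    e -> System.out.println("handled " + e));
            framework.run(new String[] {"start", "click", "stop"});
        }
    }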
