
Month: October 2014

Overview Peterson

3.3 ROUTING

From the previous chapter, we know that forwarding and routing are different things. Forwarding consists of taking a packet, looking at its destination address, consulting a table, and sending the packet on the path determined by that table. Routing, on the other hand, is the process by which forwarding tables are built.
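As a minimal sketch of the forwarding step (the table contents and interface names here are hypothetical, not from the text):

```python
# Hypothetical forwarding table mapping destination prefixes to
# outgoing interfaces; routing is the process that would build this.
forwarding_table = {
    "10.0.1.0/24": "eth0",
    "10.0.2.0/24": "eth1",
}

def forward(dest_prefix: str) -> str:
    """Forwarding: consult the table and pick the outgoing interface."""
    return forwarding_table.get(dest_prefix, "drop")

print(forward("10.0.1.0/24"))  # eth0
```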

3.3.1 Network As A Graph

The basic problem in routing is finding the path with the lowest cost among the alternatives. The weaknesses of a static approach to this problem are that it:

  • does not detect node or link failures

  • does not account for the possibility of new nodes or links being added

  • assumes edge costs cannot change

Because of these shortcomings, routing relies on dynamic protocols that can cope with the problems above and still find the lowest-cost path.

3.3.2 Distance Vector (RIP)

 

3.3.3 Link State (OSPF)

The starting assumptions of link-state routing are the same as those of distance-vector routing: each node is assumed to know whether the links to its neighbors are up or down. The basic idea of link state is that every node knows how to reach its directly connected neighbors, and if we make sure every node obtains a complete map of the network, each node can compute its own routes. Link-state routing rests on two mechanisms: reliable dissemination of link-state information, and route calculation from the accumulated link-state knowledge.

Reliable Flooding

Reliable flooding is the process that ensures every node receives the link-state information from every other node. The information is forwarded until every node has a copy of it. Each node creates a link-state packet (LSP) containing:

  • the ID of the node that created the LSP

  • a list of the nodes directly connected to that node, with the cost of the link to each

  • a sequence number

  • a time to live for the packet

The ID and the list of neighbors are used for route calculation, while the sequence number and the time to live are used to make the flooding reliable.
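The fields above can be sketched as a data structure, together with the sequence-number check that makes flooding reliable (names and the single link-state database dict are illustrative assumptions, not from the text):

```python
from dataclasses import dataclass

@dataclass
class LSP:
    node_id: str     # ID of the node that created the LSP
    neighbors: dict  # directly connected nodes and their link costs
    seq: int         # sequence number, incremented for each new LSP
    ttl: int         # time to live, after which the LSP expires

# Each node keeps only the newest LSP per originator; an incoming LSP
# is accepted (and would be re-flooded) only if its sequence number
# is newer than the stored one.
lsdb = {}

def receive_lsp(lsp: LSP) -> bool:
    stored = lsdb.get(lsp.node_id)
    if stored is None or lsp.seq > stored.seq:
        lsdb[lsp.node_id] = lsp
        return True   # new information: flood on all other links
    return False      # duplicate or stale: do not re-flood
```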

Route Calculation

The Open Shortest Path First Protocol

Features:

  • Authentication of Routing Message

  • Additional Hierarchy

  • Load Balancing

3.3.4 Metrics

3.4 IMPLEMENTATION AND PERFORMANCE
3.4.1 Switch Basics

[Figure: a processor with three network interfaces used as a switch]

The figure above shows a processor with three network interfaces being used as a switch. The numbers indicate the path a packet takes as it arrives on interface 1 and leaves on interface 2. The processor has a mechanism to move data directly from an interface into main memory without going through the CPU. Once the packet is in memory, the CPU examines its header to determine which interface the packet should leave on. Using direct memory access (DMA), the packet is then moved out through the appropriate interface.

 

3.4.2 Ports

Most switches look conceptually similar: they consist of a number of input and output ports and a fabric. At least one control processor is in charge of the whole switch and communicates with the ports either directly or, indirectly, via the switch fabric. The ports communicate with the outside world. They may contain fiber-optic receivers and lasers, buffers that hold packets waiting to be switched or transmitted, and a significant amount of other circuitry that enables the switch to function.

 

3.4.3 Fabrics

The switch fabric must be able to move packets from input ports to output ports with minimal delay and in a way that meets the throughput goals of the switch. This means the fabric exhibits some degree of parallelism: a high-performance fabric with n ports can often move one packet from each of its n input ports to one of the output ports at the same time.

 

3.4.4 Router Implementation

5.1 Simple Demultiplexer (UDP)

 

The simplest transport protocol is one that extends the host-to-host delivery service of the underlying network into a process-to-process communication service. Since there are typically many processes running on each host, the protocol needs to add a level of demultiplexing, allowing multiple application processes on each host to share the network. UDP (User Datagram Protocol) is one example of such a protocol.

The most interesting issue in such a protocol is the form of address used to identify the target process. Although it would be possible for processes to identify each other directly using the process id (pid) assigned by the OS, that approach only works in a closed distributed system in which a single OS runs on all hosts and assigns each process a unique id. The more common approach, and the one used by UDP, is for processes to identify each other indirectly using an abstract locator, usually called a port. The basic idea is that a source process sends a message to a port, and the destination process receives the message from that port.

The next issue is how a process learns the port of the process to which it wants to send a message. Typically, a client process initiates the exchange of messages with a server process. Once a client has contacted a server, the server knows the client's port and can reply to it. The real problem is how the client learns the server's port in the first place. A common approach is for the server to accept messages at a well-known port: each server receives its messages at a fixed port that is widely published. On the Internet, for example, DNS receives messages at well-known port 53 on every host, the mail service listens at port 25, and the Unix talk program accepts messages at port 517, and so on. This mapping is published periodically in an RFC and is available on most Unix systems in the file /etc/services. Sometimes a well-known port is just the starting point of communication: the client and server use the well-known port to agree on some other port that they will use for subsequent communication, leaving the well-known port free for other clients.
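The client/server exchange above can be sketched with Python's socket API. To keep this self-contained we bind the "server" to an OS-chosen port on localhost rather than a real well-known port like 53; note how the server learns the client's port from `recvfrom` and can reply to it:

```python
import socket

# "Server" process: receives on a port. Port 0 asks the OS for a free
# port; a real well-known service would bind a fixed, published port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_port = server.getsockname()[1]

# "Client" process: sends a message to the server's port.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", server_port))

data, client_addr = server.recvfrom(1024)
server.sendto(b"reply:" + data, client_addr)  # server now knows the client's port
reply, _ = client.recvfrom(1024)
print(reply)  # b'reply:hello'

client.close()
server.close()
```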

 

An alternative strategy is to generalize this idea so that there is only one well-known port: the one at which a port mapper service accepts messages. A client sends a message to the port mapper's well-known port asking for the port it should use to talk to some service, and the port mapper returns the appropriate port. This strategy makes it easy to change the port associated with each service over time, and it allows each host to use a different port for the same service.

As noted, a port is purely an abstraction. Exactly how it is implemented differs from system to system, or more precisely, from OS to OS. For example, the socket API described in Chapter 1 is one implementation of ports. Typically, a port is implemented as a message queue, as illustrated in Figure 5.2. When a message arrives, the protocol (e.g., UDP) appends it to the end of the queue. If the queue is full, the message is discarded.
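A minimal sketch of a port as a bounded message queue (the queue size and `deliver` helper are illustrative assumptions):

```python
import queue

# A port modeled as a bounded message queue: the protocol appends
# incoming messages at the tail and drops them when the queue is full.
PORT_QUEUE_SIZE = 2
port = queue.Queue(maxsize=PORT_QUEUE_SIZE)

def deliver(msg) -> bool:
    """UDP-style delivery: enqueue the message, or silently drop it."""
    try:
        port.put_nowait(msg)
        return True
    except queue.Full:
        return False

assert deliver("m1") and deliver("m2")
assert not deliver("m3")      # queue full: message discarded
print(port.get_nowait())      # m1 — the receiving process dequeues
```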

 

 

Although UDP does not implement flow control or reliable delivery, it does provide one more function besides demultiplexing messages to application processes: it verifies the correctness of a message by means of a checksum. However, the input over which the checksum is computed is somewhat unusual.

The UDP checksum takes as input the UDP header, the contents of the message body, and something called the pseudoheader. The pseudoheader consists of three fields from the IP header (the protocol number, the source IP address, and the destination IP address) plus the UDP length field. The motivation for including the pseudoheader is to verify that the message was delivered between the correct two endpoints.
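The computation can be sketched as follows: the standard 16-bit one's-complement Internet checksum, run over the pseudoheader prepended to the UDP segment. The field layout matches the description above (protocol number 17 for UDP); the function names are our own:

```python
import socket
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum with end-around carry, complemented."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries
    return ~total & 0xFFFF

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    # Pseudoheader: source IP, destination IP, a zero byte, the
    # protocol number (17 = UDP), and the UDP length field.
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, len(udp_segment)))
    return internet_checksum(pseudo + udp_segment)
```

A useful property for checking the implementation: appending the computed checksum to the data and recomputing yields 0.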

Thread

DEFINITION

A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler (typically as part of an operating system). The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is a component of a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources. In particular, the threads of a process share the latter’s instructions (its code) and its context (the values that its variables reference at any given moment).

On a single processor, multithreading is generally implemented by time-division multiplexing (as in multitasking): the processor (CPU) switches between different software threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor or multi-core system, threads can be truly concurrent, with every processor or core executing a separate thread simultaneously. The operating system uses hardware threads to implement multiprocessing. Hardware threads are different from the software threads mentioned earlier: software threads are a pure software construct, and the CPU has no notion of them and is unaware of their existence.

Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler. The kernel of an operating system allows programmers to manipulate threads via the system call interface. Threads implemented by the kernel are called kernel threads, whereas a lightweight process (LWP) is a specific type of kernel thread that shares the same state and information.

Programs can have user-space threads when threading with timers, signals, or other methods to interrupt their own execution, performing a sort of ad hoc time-slicing.

 

DIFFERENCES BETWEEN THREAD AND PROCESS

Threads differ from traditional multitasking operating system processes in that:

  • processes are typically independent, while threads exist as subsets of a process

  • processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources

  • processes have separate address spaces, whereas threads share their address space

  • processes interact only through system-provided inter-process communication mechanisms

  • context switching between threads in the same process is typically faster than context switching between processes.

Systems such as Windows NT and OS/2 are said to have “cheap” threads and “expensive” processes; in other operating systems there is not so great a difference except the cost of address space switch which implies a TLB flush.
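The shared-address-space point above can be demonstrated directly: threads created within one process all see the same objects, with no inter-process communication mechanism needed (the `shared` list and `worker` function are illustrative):

```python
import threading

shared = []  # a single object in the process's address space

def worker(i):
    # Every thread appends to the same list: threads share their
    # process's memory, unlike separate processes with separate
    # address spaces. (list.append is atomic in CPython.)
    shared.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3, 4]
```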

MULTITHREADING

Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process’s resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multithreading can also be applied to a single process to enable parallel execution on a multiprocessing system.

Multi-threaded applications have the following advantages:

  • Responsiveness: Multithreading allows an application to remain responsive to input. In a single-threaded program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. By moving such long-running tasks to a worker thread that runs concurrently with the main execution thread, it is possible for the application to remain responsive to user input while executing tasks in the background. On the other hand, in most cases multithreading is not the only way to keep a program responsive, with non-blocking I/O and/or Unix signals being available for achieving similar results.

  • Faster Execution: This advantage of a multithreaded program allows it to operate faster on computer systems that have multiple or multi-core CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution.

  • Less Resource Intensive: Using threads, an application can serve multiple clients concurrently using fewer resources than it would need when running multiple copies of itself as processes. For example, the Apache HTTP server uses a pool of listener and server threads to listen for incoming requests and process them.

  • Better System Utilization: Multi-threaded applications can also utilize the system better. For example, a file-system using multiple threads can achieve higher throughput and lower latency since data in faster mediums like the cache can be delivered earlier while waiting for a slower medium to retrieve the data.

  • Simplified Sharing and Communication: Unlike processes, which require message passing or shared memory to perform inter-process communication, communication between threads is very simple. Threads automatically share the data, code and files and so, communication is vastly simplified.

  • Parallelization: Applications looking to utilize multi-core and multi-CPU systems can use multi-threading to split data and tasks into parallel sub-tasks and let the underlying architecture manage how the threads run, either concurrently on a single core or in parallel on multiple cores. GPU computing environments like CUDA and OpenCL use the multi-threading model where dozens to hundreds of threads run in parallel on a large number of cores.
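The parallelization point above — splitting tasks into sub-tasks and letting the runtime schedule the threads — can be sketched with a thread pool (the `work` function is a hypothetical stand-in for an I/O-bound task such as serving a client request):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Stand-in for a task worth offloading to a thread; real work
    # would block on a socket, file, or computation here.
    return n * n

# Split the work across a pool of threads; the library schedules
# the sub-tasks onto the available threads for us.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(5)))
print(results)  # [0, 1, 4, 9, 16]
```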

Multi-threading has the following drawbacks:

  • Synchronization: Since threads share the same address space, the programmer must be careful to avoid race conditions and other non-intuitive behaviors. In order for data to be correctly manipulated, threads will often need to rendezvous in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

  • Thread crashes Process: An illegal operation performed by a thread crashes the entire process and so, one misbehaving thread can disrupt the processing of all the other threads in the application.
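The synchronization drawback can be made concrete: an unprotected read-modify-write on shared data is a race condition, and a mutual-exclusion lock makes it safe. A minimal sketch:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock makes the read-modify-write atomic; without it,
        # two threads could interleave between the read and the
        # write and lose updates (a race condition).
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```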

Issues with multithreading

There are a few known issues with multithreading:

  1. When a thread executes a fork system call to create a new process, does the child process duplicate all of the parent's threads, or only the thread that called fork?

  2. How does a multithreaded process handle its signals? Is a signal delivered to a process received by any one thread, by a few particular threads, or by all of the threads?

  3. How are threads scheduled, and at what level are they scheduled: the user level or the kernel level?

Operating systems schedule threads in one of two ways:

  1. Preemptive multitasking is generally considered the superior approach, as it allows the operating system to determine when a context switch should occur. The disadvantage of preemptive multithreading is that the system may make a context switch at an inappropriate time, causing lock convoy, priority inversion or other negative effects, which may be avoided by cooperative multithreading.

  2. Cooperative multithreading, on the other hand, relies on the threads themselves to relinquish control once they are at a stopping point. This can create problems if a thread is waiting for a resource to become available.
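Cooperative multithreading can be sketched in user space with generators: each "thread" relinquishes control by yielding, and a round-robin scheduler resumes them in turn. A task that never yields would starve all the others, which is exactly the problem noted above. (The scheduler and task shapes here are illustrative assumptions.)

```python
def task(name, steps, log):
    # A cooperative "thread": yield is its voluntary stopping point.
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield

def run(tasks):
    # Round-robin scheduler: resume each task until it yields,
    # then re-queue it; drop it when it finishes.
    while tasks:
        t = tasks.pop(0)
        try:
            next(t)
            tasks.append(t)
        except StopIteration:
            pass

log = []
run([task("A", 2, log), task("B", 2, log)])
print(log)  # ['A:0', 'B:0', 'A:1', 'B:1']
```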

Threads, called tasks, made an early appearance in OS/360 Multiprogramming with a Variable Number of Tasks (MVT) in 1967.

Until the late 1990s, CPUs in desktop computers did not have much support for multithreading, although threads were still used on such computers because switching between threads was generally still quicker than full-process context switches. Processors in embedded systems, which have higher requirements for real-time behavior, might support multithreading by decreasing the thread-switch time, perhaps by allocating a dedicated register file for each thread instead of saving/restoring a common register file. In the late 1990s, the idea of executing instructions from multiple threads simultaneously, known as simultaneous multithreading, reached desktops with Intel's Pentium 4 processor, under the name hyper-threading. It was dropped from the Intel Core and Core 2 architectures, but was later reinstated in the Core i7 architecture and some Core i3 and Core i5 CPUs.

 

THREAD MODELS

 

1:1 (Kernel-level threading)

Threads created by the user are in 1-1 correspondence with schedulable entities in the kernel. This is the simplest possible threading implementation. Win32 used this approach from the start. On Linux, the usual C library implements this approach (via the NPTL or older LinuxThreads). The same approach is used by Solaris, NetBSD and FreeBSD.

N:1 (User-level threading)

An N:1 model implies that all application-level threads map to a single kernel-level scheduled entity; the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks however is that it cannot benefit from the hardware acceleration on multi-threaded processors or multi-processor computers: there is never more than one thread being scheduled at the same time. For example: If one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be utilized. The GNU Portable Threads uses User-level threading, as does State Threads.

M:N (Hybrid threading)

M:N maps some M number of application threads onto some N number of kernel entities, or “virtual processors.” This is a compromise between kernel-level (“1:1”) and user-level (“N:1”) threading. In general, “M:N” threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.

Hybrid implementation examples

  • Scheduler activations used by the NetBSD native POSIX threads library implementation (an M:N model as opposed to a 1:1 kernel or userspace implementation model)

  • Marcel from the PM2 project.

  • The OS for the Tera/Cray MTA

  • Microsoft Windows 7

  • The Haskell compiler GHC uses lightweight threads which are scheduled on operating system threads.

Fiber implementation examples

Fibers can be implemented without operating system support, although some operating systems or libraries provide explicit support for them.

  • Win32 supplies a fiber API (Windows NT 3.51 SP3 and later)

  • Ruby as Green threads

  • Netscape Portable Runtime (includes a user-space fibers implementation)

  • ribs2

 

