Nicolas KUHN le jeudi 21 novembre 2013
Institut Supérieur de l'Aéronautique et de l'Espace (ISAE)
Nicolas KUHN, le jeudi 21 novembre 2013
Interactions inter-couches et liens à long délai
Discipline ou spécialité : ED MITT : Réseaux, télécom, système et architecture
Équipe d'accueil ISAE-ONERA MOIS
M. Jérôme LACAN (directeur de thèse)
M. Emmanuel LOCHIN (co-directeur de thèse)
Jury :
M. Jean-Jacques PANSIOT - Professeur d'université - Rapporteur
M. Thierry TURLETTI - Docteur - Rapporteur
M. Patrick GELARD - Ingénieur-Chercheur - Examinateur
M. David ROS - Docteur - Examinateur
M. Jérôme LACAN - Enseignant-Chercheur ISAE - directeur de thèse
M. Emmanuel LOCHIN - Enseignant-Chercheur ISAE - co-directeur de thèse
Acknowledgements, Remerciements

Ce manuscrit est le résultat de trois années d'études et d'investissement personnel. Mais une thèse n'aboutit que rarement quand le doctorant est seul. Je tiens donc particulièrement à remercier mes directeurs de thèse toulousains, Emmanuel Lochin et Jérôme Lacan, de l'ISAE, et ma co-directrice de thèse du NICTA, Roksana Boreli. Sans leurs aides, leurs questions pertinentes et leurs discussions intéressantes, je n'aurais probablement pas pu fournir l'ensemble des résultats présentés dans ce manuscrit de thèse. Sans directeurs de thèse, il n'y a pas de thèse. Je remercie Jérôme de m'avoir sorti de la couche physique où j'errais en fin d'école d'ingénieur, et de m'avoir élevé vers les couches hautes de la pile OSI. Mes directeurs ont toujours été là, pour que mon environnement de travail soit agréable, et pour préparer mon avenir de jeune chercheur. Bref, merci. Une aventure à suivre, je l'espère (mais dans un autre contexte qu'une thèse !). J'ai eu la chance de travailler avec mes directeurs de thèse pour une mission de R&T pour le Centre National d'Études Spatiales (CNES), en collaboration avec Nicolas Van Wambeke et Mathieu Gineste de Thales Alenia Space (TAS). Nous avons ensemble contribué à la réussite d'une mission de R&T pour Laurence Clarac, Caroline Bes et Emmanuel Dubois, représentant le CNES. Je tiens donc à remercier l'ensemble des équipes, avec qui j'ai eu l'occasion de parfaire mon éducation scientifique et mon aptitude à appréhender les difficultés liées à la réalisation d'un projet. Entre deux lignes de code et lectures de traces NS-2, discuter de tout et de rien permet un retour au travail avec un esprit reposé. Je tiens à remercier l'ensemble de l'équipe du Département Mathématique, Informatique et Automatique (DMIA) de la formation ENSICA de l'ISAE.

Je ne souhaite pas faire de liste exhaustive des personnes que j'ai eu la chance de côtoyer dans les deux laboratoires (de peur d'oublier quelqu'un), mais j'espère sincèrement pouvoir continuer à collaborer avec un maximum d'entre elles. Les techniques qu'Olivier m'a transmises (INTELLIPHONE !), la motivation sans limite de Yan et les pauses chocolat avec Guillaume m'ont beaucoup aidé lors de mon séjour au NICTA. Je remercie particulièrement ma famille, mes amis et Laura, dont les soutiens m'ont permis de continuer pendant les moments où ma productivité était mise à l'épreuve. Je remercie infiniment mes directeurs de thèse, et l'ensemble des personnes que j'ai pu rencontrer durant les diverses réflexions qui ont permis la rédaction de ce manuscrit de thèse.
Abstract

Network providers offer services in line with users' requests, even though the challenges introduced by user mobility and the download of large content are significant. Mobile video streaming applications are delay sensitive, and the increasing demand for this service justifies extensive studies evaluating transmission delays. On top of physical transmission delays, the time needed to access a resource or to recover data from the lower layers should not be neglected. Indeed, recovery schemes and channel access strategies introduce end-to-end delays in various ways. This document argues that these cross-layer effects should be explored to minimize transmission delays and optimize the use of network resources. Moreover, understanding the impact of lower-layer protocols on the end-to-end transmission enables better dimensioning of the network and adaptation of the carried traffic. In the context of satellite 4G links, we measure the impact of link layer retransmission schemes on the performance of various transport layer protocols. We develop the Trace Manager Tool (TMT) and the Cross Layer InFormation Tool (CLIFT) to carry out realistic cross-layer simulations in NS-2. We show that, for all target TCP variants, when the throughput of the transport protocol is close to the channel capacity, using the ARQ mechanism is most beneficial for TCP performance. In conditions where the physical channel error rate is high, Hybrid-ARQ results in better performance. In the latest specifications for DVB-RCS2, two access schemes (random and dedicated) are presented and can be implemented to manage the way home users access the satellite link for Web browsing or data transmission. We developed Physical Channel Access (PCA), which models in NS-2 the behaviour of these link layer access methods. We measure that, even though dedicated access methods can transmit more data, random access methods enable a faster transmission of short flows.

Based on these results, we propose to mix random and dedicated access methods depending on the dynamic load of the network and the sequence number of the TCP segments. As a potential exploitation of cross-layer information, we explore the feasibility of introducing low-priority traffic on long-delay paths. The rationale is to use the unused capacity of 4G satellite links to carry non-commercial traffic. We show that this is achievable with LEDBAT. However, depending on the fluctuations of the load, performance improvements can be obtained by properly setting the target value.
Contents

1 Synthèse en français
  1.1 Introduction
    Contexte
    Contribution
    Notations
    Organisation
  1.2 4G par satellite : mécanismes de retransmission de la couche liaison et TCP
    Définition d'un réseau 4G et des protocoles évalués
    Fiabilisation de la couche liaison
    Fiabilisation au niveau transport
    Cross Layer InFormation Tool (CLIFT)
    Scénario
    Résultats
    Discussion
  1.3 Lien retour DVB-RCS2 : TCP et méthodes d'accès
    Méthodes d'accès au canal de DVB-RCS2
    Physical Channel Access (PCA), un module pour NS-2
    Dimensionnement de la trame
    Utilisation du canal avec les méthodes à accès dédié
    Temps de transmission et flots courts
    Une solution potentielle : mixer les accès aléatoire et dédié
    Discussion
  1.4 Introduction d'un trafic non perturbateur sur lien satellite avec le contrôle de congestion Low Extra Delay Background Transport (LEDBAT)
    1.4.1 Le protocole LEDBAT
    LEDBAT pour transmission sur lien 4G satellite
    LEDBAT sur réseau long délai chargé
    Discussion
  1.5 Conclusion

2 Introduction
  Context
  Contribution and organization
    Chapter 4: Link layer reliability schemes and TCP for 4G
    Chapter 5: Channel access methods and TCP for DVB-RCS2
    Chapter 6: Exploit queuing delays to introduce less-than-best-effort traffic on satellite path

3 State of the art
  Protocol stack
    OSI model of a protocol stack
    Error-control codes
    Router buffering, queue management and congestion
    Transport layer congestion control protocols
    On the need for cross-layer simulations in the context of high BDP paths
  Evaluation of the performance of satellite networks
    Satellite networks and testbeds
    Network simulators
    On the difficulty to simulate the protocol stack
    Sublayer fragmentation in NS-2
  Link layer reliability schemes and TCP for 4G networks
    From 3.9G to 4G: MAC/PHY considerations
    Simulate the impact of PHY/MAC on transport layers
    Impact of MAC layer on transport layer performance
    Solutions to improve the performance of TCP
    Benefits of our approach: realistic cross-layer measurements
  3.4 On the need for evaluations of the impact of channel access methods on TCP for DVB-RCS2 links
    Notations
    Channel access on the return link
    Existing proposals to simulate the DVB-RCS2 link
    Existing studies on the impact of access methods on TCP
    Benefits of our approach: flexible access methods
  Router queuing delays and Less-than-Best-Effort (LBE) traffic on satellite links
    Using extra bandwidth for traffic with LEDBAT on satellite links
    LEDBAT algorithm

4 Realistic cross-layer evaluations in 4G: on the impact of link layer reliability schemes on the performance of TCP
  Cross-Layer InFormation Tool (CLIFT): link layer reliability schemes on physical layer traces and integration in NS-2
    CLIFT main internal components
    Physical layer traces format
    Trace Manager Tool (TMT) and link layer reliability schemes
    NS-2 module inside CLIFT
    Tcl scripts for CLIFT
    Limits and extendability of CLIFT
  Physical layer for 4G satellite links
  Distribution scenario
  Interleaved Internet scenario
  Non Interleaved Internet scenario
  Discussion

5 Channel access methods and TCP: on the choice of a channel access method for the home users of the return satellite channel of DVB-RCS2
  5.1 Physical Channel Access (PCA): modeling diverse link layer channel access methods in NS-2
    Model the access
    NS-2 module implementation details
    5.1.3 Tcl scripts for PCA
    Limits and extendability of PCA
  Access methods
    Parameters
    Access methods
  Enabling random access methods for data traffic
    Problem presentation
    Traffic generation
    Throughput and datagram loss rate
    Discussion
  Transmission times of short flows
    TCP sessions
    HTTP traffic with Packmime
    Short flows and errors
  Mixing random and dedicated access methods
  Discussion

6 Leveraging queuing delays to introduce less-than-best-effort traffic on satellite path
  LEDBAT versus TCP Vegas for LBE transmissions
  LEDBAT over a 4G satellite network
    4G satellite network configuration
    Simulation results
  LEDBAT performance in a loaded satellite network
    Network configuration
    Presentation of the results: few users in the network
    Presentation of the results: fully loaded network
  Discussion

Conclusion

A List of Publications

Bibliography
List of Figures

1.1 Validation croisée de TMT et du modèle
Fonctions enque() et deque() dans NS-2
Débit de différentes versions de TCP en fonction du mécanisme au niveau de la couche liaison (voie montante)
Impact de HARQ et ARQ quand Es/N0 décroît sur les performances de TCP
Mesure des bénéfices de l'entrelacement au niveau de la couche physique
Taux d'erreur paquet pour les méthodes d'accès aléatoire à 5 dB
Allocation de capacité : enque() et deque()
Nombre moyen de datagrammes transmis en 20 s
Nombre moyen de datagrammes perdus en 20 s
Existence de N_U
Réception des segments TCP
Temps de réception moyenné
Switch entre méthodes d'accès aléatoire et dédié
Architecture du réseau
Partage de la capacité sans le Groupe A (charge du réseau peu élevée)
Partage de la capacité avec le Groupe A (charge du réseau élevée)
Upper and lower layers: illustration of fragmentation and recovery time
One example of FEC, ARQ and HARQ
Queuing time and buffer overflow
enque() and deque() methods in NS-2
Time Frequency block description
Structure of software
Physical layer traces: transmission and decoding times
4.3 An overview of TMT states
Markov chain
Bursty errors and bursty erasure models
Validation of efficiency throughput
Validation of recovery delay
Illustration of TMT
Two datagrams sharing channel in NS-2
Adaptation of the transmission date of the datagram
Performance of physical layer codes in LTE context
Throughput of different versions of TCP depending on link layer retransmission schemes (UP)
Throughput of different versions of TCP depending on link layer retransmission schemes (DOWN)
Transport layer performance: impact of HARQ-II and ARQ when Es/N0 decreases
Transport layer performance, without physical layer interleaving
Capacity allocation: enque() and deque()
enque() method flowchart
adaptbitnextframe() function flowchart, used in the deque() method
Packet loss rate for random access methods at 5 dB
Average number of datagrams sent per TCP session in 20 s
Average number of datagrams lost per TCP session in 20 s
Transmission efficiency
Illustration of N_U
Evolution of TCP segment sequence number reception
Cumulated reception time
Datagram errors and short flows
Switch from random to dedicated access
Network architecture
Capacity sharing depending on the target value and the number of flows (without Group A)
Capacity sharing depending on the target value and the number of flows (with Group A)
List of Tables

1.1 LEDBAT sur 4G Satellite
Génération du trafic
OSI Model
Different versions of TCP
Random access method performance
Distribution scenario: transport layer retransmissions with CUBIC
Average delay of IP packets (UP)
Time needed to transmit 0.1 Mb
Use case simulation parameters
Transmission times of 30 kb
HTTP request transmission times
Retransmission probabilities
Comparison of LEDBAT and Vegas fairness to CUBIC
LEDBAT over 4G Satellite Simulation Parameters
Chapter 1

Synthèse en français
1.1 Introduction

Contexte

L'Internet offre un service dit « au mieux » et supporte une augmentation croissante des applications multimédia telles que le video streaming, la VoIP ou les vidéo-conférences. Ces applications ont une forte contrainte en délai et l'augmentation du nombre d'usagers utilisant ce service légitime les études étendues qui tendent à évaluer le temps de transmission de l'information. En effet, en plus du temps de transmission au niveau physique, il faut compter le délai d'accès à la ressource et le délai nécessaire à la récupération de l'information venant des couches inférieures. Les satellites prennent une part toujours plus importante dans les réseaux, comme dans le projet BATS ; le délai de transmission de l'information transportée sur ces liens est bien plus important que si le lien était filaire. En fonction de la technologie employée pour transmettre l'information, des mécanismes de fiabilisation et des stratégies d'accès au canal, le délai de transmission de bout-en-bout varie fortement. Bien que la pile protocolaire OSI ait été contestée, la constante évolution des demandes d'accès à Internet impose que les modifications soient intégrées dynamiquement pour convenir aux besoins des utilisateurs. Implémenter une architecture différente serait, à ce jour, trop cher et les nouveaux besoins imposent une amélioration des protocoles au niveau de chacune des couches et une meilleure communication entre elles. Ce document insiste sur le fait que les impacts des protocoles des différentes couches doivent être étudiés avec attention dans le contexte des liens à long délai.
Comprendre l'impact des protocoles des couches basses sur la transmission de bout-en-bout permet un meilleur dimensionnement du réseau, une adaptation du trafic émis, ou l'introduction d'un service à faible priorité.

Contribution

Dans cette thèse, nous avons mesuré l'impact des mécanismes des couches liaison et réseau sur les performances de divers protocoles de congestion de la couche transport. Dans le contexte des liens 4G par satellite, nous proposons un ensemble d'outils, Trace Manager Tool (TMT) et Cross Layer InFormation Tool (CLIFT), pour simuler de manière
réaliste l'ensemble de la pile OSI dans le simulateur de réseau NS-2 et ainsi évaluer l'impact des mécanismes de fiabilisation de la couche liaison sur les performances de différents protocoles de transport. Nous avons montré que, pour l'ensemble des variantes de TCP considérées, quand le débit au niveau transport est proche de la capacité du canal, utiliser ARQ au niveau liaison est optimal. Dans le cas où le taux d'erreur au niveau de la couche physique est plus élevé, HARQ permet d'obtenir un meilleur débit au niveau transport. Les dernières spécifications concernant la voie retour du lien satellite DVB-RCS2 présentent deux méthodes d'accès (aléatoire et dédiée) qui peuvent être implémentées pour permettre aux utilisateurs d'accéder à Internet ou de transmettre des données. Nous avons développé un module pour NS-2, Physical Channel Access (PCA), qui modélise l'accès au canal pour chacune de ces méthodes afin de comparer leur impact sur les performances de bout-en-bout. Nous avons mesuré que les méthodes d'accès dédié permettent un débit plus important et les méthodes d'accès aléatoire une transmission rapide des flots courts. Nous avons donc proposé de mixer ces méthodes d'accès, en fonction de l'évolution dynamique de la charge du réseau et de la taille du flot de données transmis. Finalement, nous avons étudié s'il était possible d'exploiter les données de la gateway du satellite pour introduire un trafic à priorité basse. Nous avons montré qu'il était possible, avec Low Extra Delay Background Transport (LEDBAT) comme protocole de la couche transport, d'introduire un trafic en tâche de fond.
Cependant, en fonction de la variation de la charge du réseau, paramétrer correctement son mécanisme interne est nécessaire.

Notations

Pour des raisons de clarté, nous définissons les termes qui vont être utilisés dans ce document :
- Flot : transmission d'information au niveau de la couche transport ;
- Datagramme : fragmentation au niveau de la couche réseau d'un flot ;
- Trame : ensemble de paquets d'informations répartis sur un bloc temps-fréquence entre les utilisateurs et la gateway satellite, générée tous les T_F ;
- Link Layer Data Unit (LLDU) : N_data octets d'un datagramme ;
- Physical Layer Data Unit (PLDU) : LLDU avec N_repair octets de redondance optionnels (N = N_data + N_repair) ;
- Blocs : les PLDUs peuvent être répartis en N_block blocs si la méthode d'accès en a besoin ;
- Timeslots : élément d'une trame où un bloc peut être inséré.

Organisation

La Section 1.2 présente CLIFT et TMT ainsi que nos résultats de mesures et interprétations dans le cadre de notre étude sur l'impact des mécanismes de fiabilisation de la couche liaison sur les performances de TCP. Nous détaillons PCA et les mesures relatives aux méthodes d'accès et performances de bout-en-bout dans le contexte de DVB-RCS2 en Section 1.3. La Section 1.4 rassemble les résultats de nos expérimentations sur le protocole LEDBAT afin d'évaluer sa capacité à supporter du trafic en tâche de fond dans le contexte de liens satellitaires. Nous proposons une conclusion de l'ensemble de ces travaux en Section 1.5.

1.2 4G par satellite : mécanismes de retransmission de la couche liaison et TCP

Dans le contexte des liens satellite pour la 4G et des canaux Land Mobile Satellite (LMS), les auteurs de [1, 2] mesurent les variations importantes du rapport signal-à-bruit du canal. De longs paquets d'erreurs au niveau de la couche physique résultent en une perte massive de données utiles que les mécanismes de correction d'erreur ne peuvent pas récupérer. L'implémentation de codes correcteurs d'erreur de la couche physique est souvent liée à un matériel spécifique, ce qui les rend peu modifiables une fois le produit déployé. Des mécanismes de fiabilisation peuvent être introduits au niveau de la couche liaison pour corriger les données non reconstruites par la couche physique. Ainsi, dans les systèmes cellulaires définis par LTE-Advanced, des mécanismes de type HARQ peuvent être introduits au niveau de la couche liaison [3, 4]. Il existe également de nombreux autres mécanismes qui ont été évalués en modélisant les erreurs pouvant se produire sur le canal de transmission [5, 6].
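À titre d'illustration, les modèles d'erreurs à deux états évoqués ci-dessus (du type de ceux utilisés dans [5, 6]) peuvent s'esquisser par une chaîne de Markov de Gilbert-Elliott. Esquisse hypothétique en Python, dont les noms de fonctions et les paramètres par défaut sont de notre cru, et non tirés du manuscrit :

```python
import random

def gilbert_elliott(n, p_bg, p_gb, p_err_bad=0.5, p_err_good=0.0, seed=0):
    """Génère n indicateurs d'erreur via une chaîne de Markov à deux états.
    p_gb : probabilité de passer de l'état bon à l'état mauvais ;
    p_bg : probabilité de quitter l'état mauvais (revenir au bon)."""
    rng = random.Random(seed)
    mauvais = False
    erreurs = []
    for _ in range(n):
        # Transition d'état de la chaîne de Markov
        if mauvais:
            if rng.random() < p_bg:
                mauvais = False
        elif rng.random() < p_gb:
            mauvais = True
        # Tirage d'une erreur selon l'état courant
        p = p_err_bad if mauvais else p_err_good
        erreurs.append(rng.random() < p)
    return erreurs
```

La probabilité de rester en mauvais état (1 - p_bg) correspond à l'abscisse utilisée plus loin pour la validation croisée de TMT (Figure 1.1).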
L'étude des interactions entre les protocoles des couches transport et liaison a fait l'objet de plusieurs études [7, 8, 9] ; toutefois, la spécificité du lien satellite avec un récepteur mobile (taux d'erreur bit plus important) et les évaluations de performance avec les protocoles de contrôle de congestion les plus récents amènent à penser que des travaux supplémentaires sont nécessaires. Les auteurs de [10] prouvent que, dans la plupart des cas, introduire ARQ au niveau de la couche liaison permet d'améliorer les performances d'une version basique de TCP. Ce résultat a été vérifié en simulation NS-2 par [11], où les auteurs suggèrent qu'un simple mécanisme de retransmission est suffisant, et que l'introduction de redondance est trop coûteuse en bande passante. Toutefois, les auteurs de [12] montrent un avantage inhérent au mécanisme HARQ quand il y a un lien satellite dans le réseau. Nous proposons donc des études supplémentaires sur l'impact des mécanismes de la couche liaison avec une considération réaliste des performances des codes de la couche physique et du modèle du canal. Pour ce faire, nous avons développé Cross Layer InFormation Tool (CLIFT) [13, 14] qui (1) implémente les mécanismes de fiabilisation de la couche liaison sur des traces issues de la couche physique à l'aide d'un autre outil développé, Trace Manager Tool (TMT), et (2) charge ces traces de la couche liaison dans le simulateur de réseau NS-2. Ces résultats sont publiés dans [15].

Définition d'un réseau 4G et des protocoles évalués

Dans cette section, nous définissons les spécifications derrière l'appellation 4G et les mécanismes de fiabilisation de la couche liaison dont les impacts sur différentes versions de TCP seront évalués par la suite.

Réseau 4G

Le terme 4G désigne le standard pour la quatrième génération de communication mobile. Les applications considérées qui utilisent la 4G incluent l'accès à Internet, la téléphonie sur IP, les jeux en ligne ou la télévision haute définition.
Toutes ces applications nécessitent une importante bande passante et une réactivité directe. En 2008, l'International Telecommunications Union-Radio communications sector (ITU-R) a défini les standards pour définir ce qu'est un réseau 4G, détaillés dans [16]. Les spécifications indiquent que, pour être qualifié de 4G, un accès doit garantir aux usagers rapides (voitures, trains) un
débit de 100 Mbps et aux usagers plus lents un débit de 1 Gbps. Il y a eu plusieurs problèmes quant à la détermination du terme 4G, tant des fournisseurs commercialisaient des services 4G sans pour cela répondre aux critères de la spécification. C'est dans ce contexte que nous proposons d'évaluer l'impact des mécanismes de fiabilisation de la couche liaison sur les performances de TCP. Nous détaillons donc dans les prochaines sections les protocoles utilisés au niveau des couches liaison et transport.

Fiabilisation de la couche liaison

Cette section rassemble les mécanismes de fiabilisation qui peuvent être implémentés au niveau de la couche liaison, la compréhension de leur fonctionnement permettant une meilleure appréhension de leur impact respectif sur les performances de TCP. Nous considérons des Link Layer Data Units (LLDU) comme unités de données de la couche liaison. Nous proposons de détailler le fonctionnement des mécanismes de fiabilisation de la couche liaison dont les impacts sur les protocoles de transport seront étudiés. Au niveau de la couche liaison, des retransmissions et/ou de la redondance peuvent être introduites [17, 6], dont nous proposons la classification suivante :

Forward Error Correction (FEC)

Forward Error Correction est un schéma dans lequel l'émetteur envoie une combinaison de données utiles et de redondance. Notons N_D (resp. N_R) le nombre de données utiles (resp. de redondance), et N = N_D + N_R. La récupération des N_D blocs de données utiles est possible si au moins N_D blocs ont été reçus. S'il y a plus de N_R effacements, aucune correction n'est possible. Avec ce schéma, il n'y a aucune retransmission.

Automatic Repeat request (ARQ)

La famille des Automatic Repeat request peut être découpée en différentes sous-familles : Stop-and-Wait ARQ, Go-Back-N ARQ ou Selective-Repeat ARQ.
Nous considérons, dans ce document, uniquement le mécanisme SR-ARQ, qui consiste en la retransmission des LLDUs qui ont été perdus lors de la transmission. Nous notons SR-ARQ par ARQ.
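Pour fixer les idées, voici une esquisse minimale (hypothétique, les noms de fonctions sont de notre cru) du critère de récupération d'un bloc FEC, en supposant un code MDS, et du nombre d'émissions nécessaires avec SR-ARQ :

```python
def fec_recupere(effacements, n_d, n_r):
    """FEC : les N_D LLDUs utiles sont récupérables si et seulement si au plus
    N_R des N = N_D + N_R LLDUs du bloc sont effacés (code MDS supposé)."""
    assert len(effacements) == n_d + n_r
    return sum(effacements) <= n_r

def arq_nb_emissions(pertes):
    """SR-ARQ : un LLDU perdu est retransmis jusqu'au premier succès.
    pertes[k] indique si la (k+1)-ième émission est perdue."""
    for k, perdu in enumerate(pertes):
        if not perdu:
            return k + 1
    # Toutes les émissions observées sont perdues : une de plus est nécessaire
    return len(pertes) + 1
```

FEC borne donc le délai (pas de retransmission, mais récupération impossible au-delà de N_R effacements), tandis qu'ARQ récupère toujours, au prix d'allers-retours supplémentaires.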
Hybrid-Automatic Repeat request (HARQ)

Le mécanisme Hybrid-Automatic Repeat request est une combinaison des mécanismes FEC et ARQ décrits ci-dessus. Après la transmission d'un premier bloc FEC, avec des LLDUs utiles et de redondance, HARQ autorise l'émetteur à transmettre des LLDUs de redondance supplémentaires pour le cas où la récupération de l'ensemble des LLDUs utiles n'a pas été possible du côté récepteur. Bien qu'il existe un nombre important de mécanismes de fiabilisation, nous focalisons notre attention sur FEC, ARQ et HARQ, tant leur généricité nous suffit pour obtenir une mesure réaliste des impacts des mécanismes de fiabilisation de la couche liaison sur les performances de TCP (dont les versions choisies sont présentées dans la prochaine section).

Fiabilisation au niveau transport

Les fonctionnalités de référence du protocole TCP sont définies dans la RFC de TCP NewReno [18]. TCP a été défini dans un premier temps dans un contexte où la couche physique récupère une très grande partie de l'information. L'introduction de canaux sans fil et à longs délais a amené à effectuer des modifications dans le fonctionnement de base du protocole. Les différentes variantes du protocole TCP étudiées sont TCP NewReno [18], TCP Westwood [19], TCP Vegas [20], TCP Hybla [21], TCP CUBIC [22] et TCP Compound [23].

Cross Layer InFormation Tool (CLIFT)

Ne disposant pas de testbed nous permettant de mesurer l'impact des mécanismes de la couche liaison sur ces versions de TCP, nous avons développé Cross Layer InFormation Tool, qui repose sur des traces issues de la couche physique et NS-2.
Nous présentons la logique de développement dans cette section.

Structure de CLIFT

L'idée générale de CLIFT est de déterminer les temps de transmission des datagrammes de TCP en fonction des caractéristiques des couches basses : CLIFT implémente les mécanismes de fiabilisation de la couche liaison sur des traces issues de la couche physique pour produire des traces équivalentes à la sortie de la couche liaison. Ensuite, CLIFT fragmente
les datagrammes de TCP sur ces traces et transmet chaque datagramme en fonction du temps nécessaire à sa recomposition au niveau de la couche liaison du récepteur. Des modèles de canaux existent dans différents domaines ; toutefois, ils sont soit peu précis soit privés, ce qui limite leur utilisation dans notre contexte. Mesurer des traces est une manière facile de résoudre le problème d'estimation des performances du canal de la couche physique et accroît le réalisme des résultats obtenus. CLIFT permet d'échapper au problème de la modélisation du canal, qui peut être complexe dans le cas d'utilisateurs mobiles de ressource satellite. CLIFT est composé des 3 modules suivants :
- Traces de la couche physique : ces traces contiennent une ligne pour chaque bloc de données de la couche physique, indiquant la date de réception, le nombre de bits utiles contenus dans ce bloc, et la récupération effective (ou non) du bloc de données ;
- Trace Manager Tool (TMT) : pour chaque lien où CLIFT est introduit, TMT applique le mécanisme de fiabilisation choisi sur les traces de la couche physique afin de produire des traces de la couche liaison ;
- Module NS-2 : ce module s'occupe du chargement de la trace de la couche liaison afin de réguler la transmission des datagrammes de TCP en fonction des traces qui illustrent les évènements des couches basses.

Traces de la couche physique

Les traces en entrée de CLIFT peuvent être mesurées, en utilisant par exemple des traces qui peuvent être obtenues sur CRAWDAD, ou générées par des émulateurs ou simulateurs de la couche physique [24, 25]. Chaque Physical Layer Data Unit (PLDU) envoyé au niveau de la couche physique est caractérisé par une date de transmission et un temps de décodage. Les fichiers de traces de la couche physique acceptés par notre logiciel contiennent une ligne par PLDU, avec chacun de ces temps en colonnes.
Le temps de décodage est composé de différents délais introduits par les mécanismes de fiabilisation de la couche liaison. On note :
- t_i la date de transmission de LLDU_i ;
- d_i le temps de décodage de LLDU_i.

À t = RTT/2 + t_i + d_i, la couche physique délivre LLDU_i à la couche liaison du récepteur, s'il n'y a pas de délai supplémentaire. Aussi, nous considérons qu'un LLDU est effacé quand d_i = ∞.

Trace Manager Tool (TMT)

En entrée, TMT accepte une liste de paramètres (un fichier texte contenant les caractéristiques du mécanisme de fiabilisation de la couche liaison) et le fichier de trace de la couche physique. TMT produit donc un fichier de traces en sortie de la couche liaison dépendant du fichier de trace de la couche physique et du mécanisme de fiabilisation choisi. Seules les lignes utiles sont conservées : les LLDUs de redondance ou les retransmissions n'apparaissent pas dans le fichier de trace final et le délai de décodage des LLDUs effacés est adapté en fonction du LLDU qui a permis leur récupération. Le délai supplémentaire nécessaire à la reconstruction du LLDU utile, noté d'_i, est le temps nécessaire à l'obtention (t_R) et au décodage (d_R) de LLDU_R, le LLDU qui permet la récupération de LLDU_i : d'_i = t_R + d_R - t_i. LLDU_i va être décodé par le récepteur à t = RTT/2 + t_i + d'_i. Afin de valider l'implémentation de TMT, nous avons développé les expressions théoriques du débit efficace (rapport entre le nombre d'informations reçues et le nombre d'informations envoyées au total) et du délai de décodage pour chacun des mécanismes de fiabilisation FEC, ARQ et HARQ. Notons P(i, j) la probabilité de perdre i LLDUs quand j sont envoyés, P_R(i) la probabilité que i LLDUs soient reçus et P_S(i) la probabilité que i LLDUs soient envoyés. Nous montrons la logique dans le développement de ces équations pour FEC ((1.1) et (1.2)) et HARQ ((1.3)).
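La réécriture de trace opérée par TMT pour un LLDU effacé (d'_i = t_R + d_R - t_i) peut s'esquisser ainsi. Esquisse hypothétique, sous l'hypothèse simplificatrice (de notre fait) d'une unique retransmission partant un RTT après l'émission initiale et décodée en un temps t_p :

```python
def tmt_arq(trace, rtt, t_p):
    """Applique SR-ARQ sur une trace couche physique.
    trace : liste de (t_i, d_i), avec d_i = None si le PLDU est effacé.
    Renvoie la trace couche liaison (t_i, d'_i) des seuls LLDUs utiles."""
    sortie = []
    for t_i, d_i in trace:
        if d_i is not None:
            sortie.append((t_i, d_i))          # LLDU décodé directement
        else:
            t_r = t_i + rtt                     # émission de la retransmission
            d_r = t_p                           # temps de décodage supposé de LLDU_R
            sortie.append((t_i, t_r + d_r - t_i))  # d'_i = t_R + d_R - t_i
    return sortie
```

Le module NS-2 de CLIFT ne voit donc que des LLDUs utiles, dont le délai de décodage intègre déjà le coût des retransmissions.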
Le modèle que nous proposons est publié dans [13] et plus d'informations relatives au développement de ces équations sont disponibles au Chapitre 4 (page 81).

\eta_{FEC} = \frac{\sum_{i=1}^{N_D} P_R(i)\, i}{N_D + N_R} \quad (1.1)
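L'équation (1.1) peut se vérifier numériquement. Esquisse sous l'hypothèse de travail (de notre fait, non tirée du manuscrit) d'effacements indépendants de probabilité p, dont on déduit P_R :

```python
from math import comb

def eta_fec(n_d, n_r, p):
    """Débit efficace de FEC (éq. 1.1) : espérance du nombre de LLDUs utiles
    récupérés, rapportée aux N_D + N_R LLDUs émis. Un bloc est décodé en
    entier si au plus N_R LLDUs sont effacés ; sinon, seuls les LLDUs utiles
    reçus comptent (fraction moyenne, hypothèse simplificatrice)."""
    n = n_d + n_r
    esperance = 0.0
    for e in range(n + 1):  # e : nombre d'effacements dans le bloc
        p_e = comb(n, e) * p**e * (1 - p)**(n - e)
        if e <= n_r:
            utiles = n_d                      # décodage complet du bloc
        else:
            utiles = n_d * (n - e) / n        # LLDUs utiles reçus en moyenne
        esperance += p_e * utiles
    return esperance / n
```

Sans effacement, on retrouve bien le rendement du code N_D/(N_D + N_R).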
d_{FEC} = \frac{RTT}{2} + \frac{N - 1 + p}{1 - \sum_{i=N_R}^{N-1} P(i,\, N-1)} \cdot \frac{T_P}{2} \quad (1.2)

\eta_{HARQ} = \frac{\sum_{i=1}^{N_D} P_R(i)\, i}{\sum_{j \ge 1} P_S(j)\, j} \quad (1.3)

Nous présentons la validation croisée des expressions théoriques et de l'implémentation de TMT dans la Figure 1.1, qui illustre que, pour un sous-ensemble de paramètres, les courbes d'évolution du débit efficace et du délai de décodage obtenues par notre modèle et par TMT coïncident pour différents niveaux de bruit du canal. Plus de détails relatifs à la validation croisée de notre modèle avec TMT sont présentés au Chapitre 4.

Figure 1.1 : Validation croisée de TMT et du modèle. (a) Débit efficace ; (b) Délai de décodage, en fonction de la probabilité de rester en mauvais état, pour FEC(10/12), ARQ et HARQ(5/7), théorie et TMT.

Nous pouvons considérer que TMT, qui implémente les mécanismes de fiabilisation de la couche liaison sur des traces issues de la couche physique, est validé.

CLIFT et NS-2

Cette partie présente comment les datagrammes de la couche transport sont fragmentés en LLDUs et leur temps de transmission adapté en fonction des traces de la couche liaison produites par TMT. À cette fin, nous introduisons une nouvelle file d'attente dans NS-2. La Figure 1.2 illustre que la fonction enque() est appelée quand un datagramme arrive dans la file d'attente. Quand le canal est libre (c'est-à-dire il n'y a pas un datagramme
en cours d'émission), la fonction deque() est appelée pour transmettre le datagramme choisi en fonction des mécanismes de gestion de la file d'attente. Nous adaptons l'ordonnancement du buffer d'émission et la date à laquelle la fonction deque() est appelée. Nous modélisons le comportement des couches physique et liaison en retardant la transmission des datagrammes et/ou en introduisant des pertes.

Figure 1.2 : Fonctions enque() et deque() dans NS-2 (buffers d'émission et de réception entre nœuds du réseau).

Un datagramme peut être segmenté en (m+1) LLDUs (LLDU_n, ..., LLDU_{n+m}). Nous notons E_i la date de mise en file d'attente pour émission du datagramme i, T_i la date de transmission de ce datagramme et D_i son temps de décodage. Nous déterminons, à travers l'utilisation de la trace de la couche liaison, le LLDU tel que t_n ≤ E_i < t_{n+1}. Parmi les (m+1) LLDUs, nous déterminons D_i = max_{k ∈ [n, n+m]}(t_k + d_k) - T_i. En effet, max_{k ∈ [n, n+m]}(t_k + d_k) + RTT/2 représente la date à laquelle le datagramme i est reçu par le récepteur. Plus de détails concernant l'implémentation du module NS-2 sont disponibles dans la Section 4.1.4.

Scénario

Nous considérons trois cas d'étude qui correspondent chacun à des traces de la couche physique différentes. Nous utilisons les simulateurs OFDM et TDM du Centre National
d'Etudes Spatiales (CNES)⁴, since they implement the most recent physical-layer codes together with realistic satellite-link characteristics, such as the satellite orbits [26].

- satellite broadcasting (referred to as the Diffusion scenario): (1) on the forward link, we consider a 3GPP2 Turbo Code with a (pre-encoding) codeword of 1523 bytes; (2) on the return link, we consider a 3GPP Turbo Code with a (pre-encoding) codeword of 33 bytes. The channel interleaving depth at the physical layer is 36 ms. We present the results of this scenario for Es/N0 = 14 dB, i.e. PER < 10⁻²;
- bidirectional Internet traffic with interleaving (referred to as the Interleaved Internet scenario): we consider a 3GPP Turbo Code with a (pre-encoding) codeword of 33 bytes. The channel interleaving depth at the physical layer is 36 ms. We present the results of this scenario for Es/N0 ∈ [5; 8] dB, i.e. PER ∈ [10⁻²; 10⁻¹];
- bidirectional Internet traffic without interleaving (referred to as the Non Interleaved Internet scenario): we consider a 3GPP Turbo Code with a (pre-encoding) codeword of 33 bytes. The channel interleaving depth at the physical layer is 0 ms. We present the results of this scenario for Es/N0 ∈ [5; 12] dB, i.e. PER ∈ [10⁻²; 10⁻¹].

Results

For all simulations, we consider a mobile vehicle moving at 60 km/h. We plot the average goodput over a 500 s transmission.

Diffusion scenario

We present the results for the Diffusion scenario in Figure 1.3. We measure that, even when the bit error rate is below 10⁻², TCP NewReno and TCP Compound do not use the whole bandwidth on average.
TCP Westwood, CUBIC and TCP Hybla make better use of the resource, since they were designed without link-layer considerations and to quickly grow the congestion window

⁴ More details on the organisation:
after a window reduction caused by localized losses. We measure that bandwidth is wasted transmitting (useless) redundancy at the link layer when the link-layer reliability mechanism is of HARQ type. In this case, we therefore propose to combine CUBIC, TCP Westwood or TCP Hybla with ARQ at the link layer whenever the physical-layer packet error rate is below 10⁻².

Figure 1.3: Throughput [Mbps] of different TCP variants (TCP NewReno, CUBIC, TCP Compound, TCP Hybla, TCP Westwood) as a function of the link-layer mechanism (ARQ, HARQ10/12, HARQ10/15, HARQ50/52) on the return link

Interleaved Internet scenario

In this scenario, whose results are shown for TCP Hybla and CUBIC only in Figure 1.4, the physical-layer packet error rate is, contrary to the previous case, above 10⁻². In this situation, we observe that introducing HARQ as the link-layer reliability mechanism yields a significant goodput gain when transmitting information between a satellite and a mobile vehicle.
Figure 1.4: Impact of HARQ and ARQ on TCP performance as Es/N0 decreases — throughput [kbps] versus Es/N0 [dB] for (a) TCP Hybla and (b) CUBIC, with ARQ, HARQ10/12 and HARQ50/52

Figure 1.5: Measuring the benefits of physical-layer interleaving — throughput [kbps] versus Es/N0 [dB] for (a) TCP Hybla and (b) CUBIC, with ARQ, HARQ10/12 and HARQ50/52

Non Interleaved Internet Scenario

Figure 1.5 shows the performance of CUBIC and TCP Hybla with HARQ at the link layer. In particular, we measure that a 36 ms interleaver yields a 109 kbps gain at the same signal-to-noise ratio. To reach the same throughput as a system running CUBIC, HARQ and interleaving, a system running CUBIC and HARQ without interleaving must transmit with 2 dB more power.

Discussion

When redundancy is transmitted at the link layer, more bandwidth is consumed: despite less frequent congestion-window reductions, if the channel capacity can already be fully used (i.e., if the transport protocol is efficient, or the physical-layer packet error rate is low), transmitting redundancy
is not an efficient approach. When the bit error rate is high (resulting in more frequent erasures at the link layer), we observe that HARQ yields better end-to-end performance than ARQ at the link layer.

1.3 The DVB-RCS2 return link: TCP and access methods

The Digital Video Broadcasting (DVB) Project is a consortium established to define standards for video and data services.⁵ The DVB satellite service is split between the transmission system (from providers to users) and the control channel (from users to providers). The current standards for satellite links are DVB-S2 (broadcast channel) and DVB-RCS (return channel over satellite). In 2011, DVB approved the first two (of three) specifications for the second generation of the return channel over satellite, DVB-RCS2.⁶ [27] gives an overall description of the system; [28] details the standards for the satellite lower layers; [29] presents the specifications for the higher layers. The lower-layer specifications [28] have also been submitted to the European Telecommunications Standards Institute (ETSI) for formal standardisation. This version, covering data transmission from users to the satellite gateways, integrates new features such as increased security, richer quality of service and IPv6 support. The authors of [28, p. 126, sec. ] advise that "the ST shall by default not transmit in contention timeslots for traffic, but may do this when explicitly allowed by indication in the Lower Layer Service Descriptor or by other administrative means", making use of random access methods for logon procedures [28, p. 182, sec. ] and, optionally, for traffic. Moreover, in [28, p.
42, sec. ], the specifications state that the Terminal Burst Time Plan Table version 2 (TBTP2), which gives users the timeslots on which they may send data, "may be used to assign dedicated access timeslots, [...] allocate timeslots for random access". The specifications therefore show a potential interest in introducing both random and dedicated access methods, without providing further implementation details.

⁵ Further details at
⁶ Source: press release "DVB-RCS2 MEETS WITH APPROVAL", available at dvb.org/news_events/news/dvb-rcs2-published/index.xml
However, some parts of the specification assume a preference for dedicated access methods [28, p. 126, sec. ]. With dedicated access, the channel is reserved for a given user, which makes it possible to optimise the physical-layer coding rate and to provide reliable communication; however, the channel reservation takes 500 ms. With random access methods, the channel is not reserved: all users transmit their data on timeslots with no prior knowledge of their use by other users. Data may be lost, and retransmissions then become necessary. Not having to send a reservation request saves 500 ms on the delivery of the first data packets.

The authors of "Why latency matters to mobile backhaul"⁷ stress that delay is a critical metric for users. To support this point, they gather several studies, including one by Google estimating that an additional 500 ms delay on a search results in a 25% drop in the number of searches users perform. This motivates our interest in comparing the performance of each channel access method. The studies in [30, 31, 32] measure the impact of dedicated channel access on TCP performance. Moreover, the analysis of the poor performance of TCP when the channel access delay varies strongly, illustrated in [30], shows that random access methods are not a solution for long file transfers, but can be attractive for short flows [33, 34].
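The trade-off above can be made concrete with a back-of-the-envelope comparison (our illustration, not from the standard): dedicated access pays a fixed 500 ms reservation before the first data burst, while random access starts immediately but pays a retry timeout on each collision. The propagation delay and retry timeout values below are assumptions.

```python
# Rough time-before-first-data comparison: 500 ms reservation for dedicated
# access (order of magnitude from DVB-RCS2) versus immediate but collision-
# prone random access. PROPAGATION and TIMEOUT are illustrative assumptions.

RESERVATION = 0.5   # s, capacity request (dedicated access)
PROPAGATION = 0.25  # s, assumed one-way GEO propagation delay
TIMEOUT = 0.5       # s, assumed retry timeout after a collision

def dedicated_first_data():
    return RESERVATION + PROPAGATION

def random_first_data(p_collision):
    # expected attempts of a geometric distribution: 1 / (1 - p)
    attempts = 1.0 / (1.0 - p_collision)
    return PROPAGATION + (attempts - 1.0) * TIMEOUT

for p in (0.0, 0.2, 0.5):
    print(p, dedicated_first_data(), round(random_first_data(p), 3))
```

With these assumptions, random access wins as long as the collision probability stays moderate (here, below 0.5), which is consistent with its appeal for short flows.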
To the best of our knowledge, no study compares the impact of each of these channel access methods on TCP performance while considering the most recent random access methods.

Channel access methods in DVB-RCS2

The channel access method defines how data can be transmitted between the satellite gateway and the users (satellite terminals, ST): the satellite resource is shared among users, and the protocols can allocate part of it to each of them. To this end, the resource is distributed every T_F = 45 ms by the Network Control Center (NCC). The NCC adapts the distribution of the available timeslots in each frame so as to (1) accept new applications; (2) adapt the timeslot reservation to the characteristics of each user

⁷ Published by O3b Networks and Sofrecom. Available at: mobile-backhaul
(according to the different priorities among them); (3) optimise the physical-layer code for dedicated access [27]. The NCC transmits a Burst Time Plan (BTP) to the users to tell them when and how to transmit their data. A time-frequency block, a frame [28, p. 157, sec. ], is then transmitted. Let N_S be the number of timeslots available per frequency. The frequencies on which data are transmitted can be split according to the access method: F_R frequencies are dedicated to random access and F_D are reserved for dedicated access. In total, a frame can carry N_S × (F_R + F_D) timeslots.

With dedicated access methods, the satellite terminal transmits data on reserved timeslots, i.e., no other user sends data on those timeslots. These methods allow reliable transmission and an optimised physical-layer code. However, they require a capacity reservation request, which takes 500 ms in the DVB-RCS2 context. Each timeslot lasts 1.09 ms and can carry 536 symbols. In the simulation, we consider a clear-sky scenario, i.e., the signal-to-noise ratio is 8.6 dB. We assume that a user applies a rate R = 2/3 code with an 8PSK modulation to encode 920 information bits into a 1380-bit codeword, i.e., 460 symbols.

With random access methods, the traffic of several satellite terminals may be transmitted on the same timeslots, and the gateway cannot tell the different data apart. One of the main advantages of these methods is that a user does not need to send a resource reservation request. To recover the information, N_repair redundancy bytes are added to the N_data useful bytes to form a codeword of N = N_data + N_repair bytes, spread over N_block blocks.
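The clear-sky dedicated-access figures quoted above are mutually consistent, and can be checked with a few lines of integer arithmetic:

```python
# Checking the dedicated-access numbers: a rate-2/3 code maps 920 information
# bits onto a 1380-bit codeword; with 8PSK (3 bits per symbol) that is 460
# symbols, which fits within the 536-symbol timeslot.

INFO_BITS = 920
codeword_bits = INFO_BITS * 3 // 2   # rate 2/3 -> 1380 coded bits
symbols = codeword_bits // 3         # 8PSK: 3 bits/symbol -> 460 symbols
print(codeword_bits, symbols)        # 1380 460
assert symbols <= 536                # one 1.09 ms timeslot suffices
```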
N_RA timeslots form a Random Access block (RA block) over which erasure codes are introduced [28, p. 126, sec. ]. Among the different techniques, we can cite: Multi-Slots Coded ALOHA (MuSCA) [35], ALOHA [36], Diversity Slotted ALOHA [37] and Contention Resolution Diversity Slotted ALOHA (CRDSA) [38]. The performance of a random access method is determined by the ability of the code to decode the N_data useful bytes of a user as a function of the number of users, while maximising the number of bytes transmitted. In systems using CRDSA, at 5 dB, each user applies a rate R_CRDSA = 2/3 code with a QPSK modulation to encode 613 bits (597 useful bits and 16 header bits) into a 920-bit codeword, i.e., 460 symbols [38]. We consider N_block = 3 blocks, spread over 3 timeslots of an RA block. With
MuSCA, at 5 dB, each user encodes a 680-bit packet (594 useful bits and 86 header bits) with a rate-1/4 Turbo Code and a QPSK modulation. For MuSCA, we also consider N_block = 3. We compare in Figure 1.6 the performance of the two mechanisms used in the simulations.

Figure 1.6: Packet loss rate of the random access methods (CRDSA and MuSCA) at 5 dB, as a function of the number of packets sent on an RA block of 100 slots

Random access methods are regularly improved; we choose to consider CRDSA and MuSCA since we are mostly interested in the performance differences between random and dedicated access methods. Although the performance of these methods is well known, there is, to our knowledge, no end-to-end performance comparison, which the module presented below, Physical Channel Access (PCA), makes possible.

Physical Channel Access (PCA), an NS-2 module

We propose to model channel access in NS-2 by implementing a module called Physical Channel Access (PCA). A datagram stays in the queue until its actual transmission by the lower layers has completed. Figure 1.7 compares the calls to enque() and deque() in DropTail and in PCA. Depending on the access method, part of a datagram can be sent every T_F, and an instantaneous transmission is performed at layer 3 once the packet has been entirely transmitted by the lower layers.
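The PCA idea of transmitting a datagram piecewise, one frame at a time, can be sketched as follows (a minimal sketch with assumed names, not the actual NS-2 module): each frame period T_F, a deque pass spends the frame's bit budget on the head of the queue, and a datagram is handed to the network layer only once all of its bits have been sent.

```python
# Minimal sketch of a PCA-style deque pass: the queue holds the remaining
# bit count of each datagram; every T_F, up to bits_per_frame bits are sent
# and fully transmitted datagrams are counted as delivered.
from collections import deque

T_F = 0.045  # s, frame duration (DVB-RCS2 order of magnitude)

def deque_pass(queue, bits_per_frame):
    """One call every T_F; returns how many datagrams completed this frame."""
    completed = 0
    budget = bits_per_frame
    while queue and budget > 0:
        sent = min(queue[0], budget)
        budget -= sent
        queue[0] -= sent
        if queue[0] == 0:
            queue.popleft()       # fully sent: pass it to the network layer
            completed += 1
    return completed

q = deque([1000, 1000])
print(deque_pass(q, 1500))  # 1 -> first datagram done, 500 bits of the second
print(deque_pass(q, 1500))  # 1 -> remainder of the second datagram completes
```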
Figure 1.7: Capacity allocation: enque() and deque() — with DropTail, a dequeued packet leaves the access-point buffer for the shared link at once; with PCA, each packet leaves the buffer progressively, frame by frame
The deque() function emulates the arrival time of a new frame at the receiver. It is called every T_F. A loop runs over all datagrams in the queue to find (1) those whose last bits are sent in the current frame, so that they can be passed to the NS-2 network layer, and (2) those for which the amount left to send after the current frame must be computed. This method therefore makes it possible to:

- fairly distribute the capacity for dedicated access methods;
- compute the error probability for random access methods (since it depends on the network load).

Further details on the development of PCA are available in Section 5.1.2, page

Frame dimensioning

The frame dimensions are: (F_D + F_R) × N_S = 100 × 40 = 4000 timeslots per frame. Since an antenna cannot send and receive on different frequencies at a given instant, the maximum number of timeslots usable per frame by a given user is 40 with dedicated access, and a user transmits one PLDU per timeslot. With random access, one must account for the fact that a PLDU may be split over N_block = 3 blocks, reducing to 13 the number of timeslots a user can then exploit.

Channel use with dedicated access methods

The capacity is shared fairly among the users with dedicated channel access. Hence, with N_U users, each can exploit at most min((F_D + F_R) × N_S / N_U, N_S) = N_S × min((F_D + F_R)/N_U, 1) timeslots. When N_U ≤ (F_D + F_R) (i.e., when N_U ≤ 100), a TCP session can transmit N_S/N_block × N_data = 40/1 × 920 = 36800 bits per frame with dedicated access, and 40/3 × 594 = 7722 bits per frame with MuSCA as the random access method. Dedicated access methods therefore seem to let TCP sessions transmit more information per frame.
However, when the network load increases, (1) the number of timeslots available to each dedicated-access user decreases (whereas it remains the same with random access methods) and (2) the collision probability of random access methods increases.

We consider two nodes in NS-2. The first, representing a set of satellite terminals, carries a variable number of TCP sessions towards the second node. We measure in Figure 1.8 the average number of datagrams a TCP session could send during 20 s of simulation, and in Figure 1.9 the datagram error probability, computed as the ratio between the datagrams lost at the satellite gateway and the number of datagrams received.

Figure 1.8: Average number of datagrams transmitted in 20 s, as a function of the number of FTP sessions (dedicated access, CRDSA and MuSCA)

We observe that, as the load increases, the drop in the number of transmitted datagrams caused by the errors of random access methods is sharper than the drop caused by the fair capacity sharing of dedicated access methods. For long transmissions, dedicated access methods are therefore preferable. We previously computed that the maximum number of timeslots available per frame to each TCP session is N_S × min((F_D + F_R)/N_U, 1). When N_U ≥ (F_R + F_D), a TCP session can transmit (F_R + F_D) × N_S/N_U × N_data^DE bits per frame with dedicated access, and N_S/N_block × N_data^RA with MuSCA. For a random access method to be worthwhile, the number of users N_U ≥ (F_R + F_D) allowing it to transmit more data must satisfy (1.4).
Figure 1.9: Average number of datagrams lost in 20 s, as a function of the number of FTP sessions (CRDSA and MuSCA)

(F_R + F_D) × N_S/N_U × N_data^DE ≤ N_S/N_block × N_data^RA   (1.4)

With our simulation parameters, the minimum N_U satisfies N_U ≥ 477. With MuSCA, these N_U users would be spread over 40 RA blocks, i.e., N_U × 13/40 = 477 × 13/40 ≈ 155 users per RA block, for which MuSCA cannot guarantee reliable transmission, as shown in Figure 1.6. To our knowledge, no random access method makes it possible to transmit more information than a dedicated access method: the point N_U of Figure 1.10 therefore does not exist.

Transmission time and short flows

Considering that dedicated access incurs a connection establishment delay and that short flows represent 80% of the Internet traffic (as measured by [39, 40]), we propose to measure the transmission time of short flows with random access methods. Figure 1.11 shows the evolution of the TCP segments of a given flow. The RTT and the congestion window (CWND) growth can be visualised during the
Figure 1.10: Existence of N_U — number of datagrams sent versus number of users, for dedicated access, random access up to N_MaxRA, and a hypothetical "futurist" random access method

slow-start phase. We observe that the time needed to transmit two datagrams is smaller with dedicated access methods (denoted T2) than with random access methods (denoted T1). Consequently, the congestion window grows faster with dedicated access, but starts growing later.

Figure 1.11: Reception of TCP segments — sequence number versus time [s], showing the RTT, T1, T2 and the CWND growth, for dedicated access, CRDSA and MuSCA

Figure 1.12 shows the average time needed to transmit a given number of datagrams. At t = 1.5 s, 8 datagrams have been received with dedicated access, against 14 with MuSCA. At t = 2.7 s, all access methods have allowed
the transmission of the same number of datagrams, for the number of users in this simulation. Since the capacity is fairly shared among users under dedicated access, the transmission time with that access method increases with the number of users.

Figure 1.12: Average reception time [s] versus sequence number of the received datagram, for dedicated access, CRDSA and MuSCA

A possible solution: mixing random and dedicated access

We therefore propose a solution that combines dedicated and random access methods: the first packets of a TCP session are transmitted quickly using random access; if the flow is long, the following packets are transmitted on dedicated access. We illustrate this solution in Figure 1.13. This idea has also been proposed in the IP over Satellite (IPOS) standards⁸, and we therefore propose to consider this feature in the DVB-RCS2 service.

Discussion

The latest DVB-RCS2 specifications allow the integration of both dedicated and random access methods. We evaluated the impact of each on TCP performance by implementing an NS-2 module.

⁸ Reference TIA 1008-A - IP over Satellite (IPOS)
Figure 1.13: Switching between random and dedicated access methods — reception time versus datagram sequence number, switching at SEQ = f(N) from random access (N users) to dedicated access (N users)

We measure that data transmission is more efficient with a dedicated access method, since collisions can occur with random access methods. We also measure the gain, in terms of transmission time of the first datagrams, offered by random access methods. We therefore propose to mix the two methods to benefit from the advantages of each, and thus improve the user experience.

1.4 Introducing non-intrusive traffic on a satellite link with the Low Extra Delay Background Transport (LEDBAT) congestion control

A renewed interest in exploring the performance of Less-than-Best-Effort (LBE) protocols has been observed within the Internet Engineering Task Force (IETF) [41]. The idea is to introduce low-priority transport protocols for traffic whose transmission delay is not a critical metric, such as e-mail. Indeed, the authors of [42] propose to offer free Internet access based on LBE services. Many transport protocols have been proposed as potential candidates
for carrying LBE traffic (presented in RFC [41]), such as TCP Vegas [20]. We propose to evaluate the ability of Low Extra Delay Background Transport (LEDBAT)⁹ to carry LBE traffic over links with a large bandwidth-delay product. The results presented in this section have been published in [43].

The LEDBAT protocol

Before presenting our measurement results, we detail how LEDBAT operates. The protocol is characterised by the following parameters: the target queuing delay (τ), the gain applied to delay variations (γ = 1/τ), the minimum One-Way Delay (D_min) and the current One-Way Delay (D_ack). On the reception of a new acknowledgement, the congestion window (cwnd) is updated according to (1.5).

cwnd = cwnd + γ (τ − (D_ack − D_min)) / cwnd   (1.5)

LEDBAT congestion control is based on the variations of the queuing delay (i.e., the queuing delay is the primary congestion signal), estimated by (D_ack − D_min). The target queuing delay, τ, is the extra queuing delay that LEDBAT allows itself to introduce.

The authors of [44] describe the motivations that led to the development of LEDBAT and conduct the first performance evaluations of the algorithm. The authors of [45] propose an analytical model of LEDBAT in order to run large-scale simulations on the impact of the protocol's internal parameters. Based on these results, the LEDBAT RFC [46] states that γ must be set to 1/τ or less, and that τ must be lower than 100 ms.

LEDBAT for transmission over a 4G satellite link

We measure the ability of LEDBAT to exploit the capacity left free by a foreground flow, in the context of a communication between a mobile vehicle and a satellite. The LEDBAT flow must not disturb the foreground flow. The simulation lasts 450 s. The foreground flow (CUBIC) transmits data between 0 s and 225 s, and between 270 s and 450 s.
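The window update of Eq. (1.5) can be sketched in a few lines (a simplified illustration; real LEDBAT implementations additionally filter the one-way delay samples):

```python
# Sketch of the LEDBAT cwnd update of Eq. (1.5): the window grows while the
# estimated queuing delay (D_ack - D_min) is below the target tau, and
# shrinks once it exceeds it. TARGET here is an illustrative value.

TARGET = 0.1          # s, target queuing delay tau
GAIN = 1.0 / TARGET   # gamma = 1/tau, as recommended by the RFC

def on_ack(cwnd, d_ack, d_min):
    """Update cwnd (in segments) on each new acknowledgement."""
    queuing_delay = d_ack - d_min
    off_target = TARGET - queuing_delay
    return cwnd + GAIN * off_target / cwnd

cwnd = on_ack(10.0, d_ack=0.30, d_min=0.25)  # 50 ms queuing, below target
print(round(cwnd, 3))  # 10.05 -> the window grows
```

Note that the additive term is divided by cwnd, so the growth is at most roughly one segment per RTT, and that a queuing delay above τ makes off_target negative, shrinking the window.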
Between  s and  s, data are transmitted with LEDBAT. We

⁹ LEDBAT has recently been standardised at the IETF
use CUBIC as the transport protocol of the foreground flow since it is implemented in Android and in the Linux kernel.

Table 1.1: LEDBAT over a 4G satellite link — for each target queuing delay τ (ms), throughput (kbps) of the CUBIC and LEDBAT flows at different moments of the simulation

The results presented in Table 1.1 show that, in some cases, LEDBAT can be a good candidate to carry LBE traffic, using capacity without disturbing the CUBIC flow. However, these results also show that the value of the target has an impact, which we propose to evaluate with further simulations in other contexts.

LEDBAT on a loaded long-delay network

We consider a dumbbell network architecture in which four groups of FTP applications transmit data to the receivers at different times; more details are given in Table 1.2 and Figure 1.14. We present the goodput measured in each simulation case in Figures 1.15 (when only groups B and C transmit) and 1.16 (when all groups transmit). Introducing flows into the network increases the use of the central link, and we focus on the ability of LEDBAT to (1) introduce LBE traffic and (2) have the smallest possible impact on the performance of the foreground flows.
Table 1.2: Traffic generation — transmitters A1...AX, B1...BY, C1...CZ

Group           Number of flows     Transmission dates (s)
Group A         100                 [0;300]
Group B         100                 [0;30], [60;90], [180;210], [240;270]
Group C         100                 [0;75], [150;225]
LEDBAT group    [1; 10; 25; 50]     [0;90]

Figure 1.14: Network architecture — the transmitters of groups A, B and C and the LEDBAT transmitters share LINK 1, the satellite gateway and LINK 2 towards receivers 1 and 2

Figure 1.15 shows that, when the target queuing delay increases and the number of flows stays fixed, (1) the LEDBAT flows use fewer resources to transmit data and (2) the capacity used by the CUBIC flows decreases. When the target queuing delay goes from 100 ms down to 5 ms with the number of LEDBAT flows fixed at 50: (1) the capacity used by the LEDBAT flows increases by 5%; (2) the capacity used by the CUBIC flows increases by 2%. Consequently, with 50 LEDBAT flows in this simulation configuration, lowering the target value from 100 ms to 5 ms increases the overall link utilisation by 7%.
Figure 1.15: Capacity sharing without Group A (lightly loaded network) — used capacity (%) of the LEDBAT flows, of the CUBIC flows and cumulative, versus the target value (ms), for 0, 1, 10, 25 and 50 LEDBAT flows on a 5 Mbps link

Figure 1.16 shows that when the target queuing delay increases from 5 ms to 100 ms, with the number of LEDBAT flows fixed at 50, the capacity used by the LEDBAT flows increases by 5%, but the capacity of the CUBIC flows also decreases by 5%. The congestion introduced by the LEDBAT flows directly reduces the capacity available to the CUBIC flows. This impact is smaller when the target value is low.

Discussion

Although the LEDBAT protocol was designed to carry LBE traffic, we measure that one of its internal parameters has a significant impact on the aggressiveness of the protocol. We show that a trade-off must be found between (1) disturbing the foreground flows, (2) introducing LBE traffic, and (3) increasing the overall
Figure 1.16: Capacity sharing with Group A (heavily loaded network) — used capacity (%) of the LEDBAT flows, of the CUBIC flows and cumulative, versus the target value (ms), for 0, 1, 10, 25 and 50 LEDBAT flows on a 5 Mbps link

link utilisation. When the network is heavily loaded, LEDBAT is less aggressive when the target value is low. In our simulations, LEDBAT shows poor fairness when the target value exceeds 5 ms. Moreover, when the load is lower, setting this parameter to 5 ms still optimises both the transmission of the LBE traffic and the total capacity used. It is worth noting that the LEDBAT RFC suggests a target value of 100 ms; the results presented here lead us to conclude that this parameter should not exceed 5 ms. We do not propose major modifications of the LEDBAT algorithm, but we show that there are cases in which the default parametrisation is not suitable. Consequently, we believe that LEDBAT should use network measurements to adapt its internal parameters or its mechanism.
1.5 Conclusion

Among all user requirements, a low transmission delay is paramount. To address this issue in the context of long-delay links, this document presents results that aim to help providers integrate new features into existing network architectures. We measured the impact of link-layer retransmissions on different congestion control protocols over satellite links providing 4G. For the DVB-RCS2 satellite return link, we compared two resource access methods to assess the quality of service experienced by the users of this service when browsing the Internet or sending an e-mail. Finally, we explored the possibility of introducing a background data transmission service based on Low Extra Delay Background Transport (LEDBAT) to exploit unused satellite resources.

Following up on the studies started during this thesis leads us to evaluate the possibility of dynamically adapting the internal parameters of the LEDBAT algorithm. In order to provide more results on a parametrisation of LEDBAT that would give it an LBE behaviour in a long-delay context, a PhD thesis started in 2012 at ISAE. With the help of Emmanuel Lochin, I took part in the preliminary dimensioning studies and the first discussions of this doctorate.

In addition to the work presented in this document, we ran simulations to contribute to the ongoing investigations of the BufferBloat problem [47]. This phenomenon originates in excessive buffering, i.e., overly large queues at the network layer for instance, which results in a large network latency and a reduced end-to-end goodput.
This large queuing delay has notably been measured at users' access points, and one solution proposed to date is CoDel [48], an Active Queue Management (AQM) mechanism whose algorithm drops packets that have spent more than 5 ms in the queue. To better understand the problem, we studied the impact of different AQM schemes on end-to-end performance depending on the transport protocol used. In particular, we confirmed the results of [49], which state that the impact of the delay introduced by the MAC layer cannot be neglected when parametrising CoDel. The experience acquired
50 dans l évaluation des performances inter-couches vont permettre une compréhension du problème: la proposition d une solution générique nécessite une considération des délais introduits par les couches basses. Nous pouvons faire une analogie entre ces délais et les délais d accès au canal avec DVB-RCS2, ou le délai de récupération de l information dans le cadre de l étude 4G. Aussi, une solution peut considérer les délais de mise en file d attente des noeuds intermédiaires, dont l approche serait alors similaire au protocole de contrôle de congestion LEDBAT. 32
Chapter 2

Introduction
2.1 Context

Protocols have been adapted and infrastructures have been deployed to overcome the challenges raised by mobile access and the high demand for video streaming. Such applications are delay sensitive, and the increasing demand legitimates extensive studies evaluating end-to-end transmission delays. On top of physical transmission delays, accessing a resource or recovering data at the lower layers introduces delay and should not be neglected. Recovery schemes or channel access strategies can be introduced and impact end-to-end delays in various ways.

The OSI protocol stack has been widely discussed, but the fact that Internet access is constantly evolving imposes that adjustments be implemented directly, to dynamically adapt the network to the needs of the end users. We believe that other fragmentation strategies will not be deployed in the coming years, at least for economic reasons. Improvements are required within each of the existing layers of the OSI model, or in the interactions between them, to deliver large volumes of traffic to mobile users. As an example, when Wi-Fi communications were deployed, issues were detected in the protocols developed at the transport layer: datagram errors, which were interpreted as congestion information, occur more often on those links, so transport layer protocols had to be adapted. Redundancy at the link layer was also explored to recover the data before transport protocols detect false congestion information. Some interesting cross-layer solutions have been proposed to mitigate those issues.

If satellites take part in the network, running simulations that consider the layers of the protocol stack is complex. This document argues that cross-layer effects should be investigated to study the impact of low layer schemes on recent transport layer protocols over long delay paths. Understanding the impact of low layer protocols on the end-to-end transmission will enable better dimensioning of the network and adaptation of the traffic carried over it.
2.2 Contribution and organization

This document presents cross-layer studies for three long-delay link contexts: 4G, DVB-RCS2 and less-than-best-effort transmissions. Chapter 3 presents the OSI model to describe the different protocols used throughout this document. This chapter also justifies the need for cross-layer
studies, by gathering specifications and recent publications in each context.

Chapter 4: Link layer reliability schemes and TCP for 4G

In this chapter, we study the impact of reliability mechanisms introduced at the link layer on the performance of transport protocols in the context of 4G satellite links. The implementation of physical layer schemes is commonly tied to specific hardware, making it ill suited to modifications after the design or deployment of the system. Reliability schemes can be introduced at the link layer in order to recover data that the physical layer may not be able to rebuild. Further investigation of the impact of link layer schemes on the performance of transport protocols is needed, under realistic conditions that can only be achieved by considering real physical layer performance. Specifically, we design a software module that performs a realistic analysis of the network performance. The software module is composed of two main components: the Trace Manager Tool (TMT) [13], which, based on physical layer traces, produces equivalent link layer traces as a function of the chosen link layer reliability mechanism; and the Cross-Layer InFormation Tool (CLIFT) [14], which loads the link layer traces into the NS-2 network simulator. The results presented in this chapter are under review for publication in the International Journal of Satellite Communications and Networking [15].

Chapter 5: Channel access methods and TCP for DVB-RCS2

This chapter presents the study of the impact of channel access schemes on the performance of transport protocols in the context of DVB-RCS2. The latest specifications of DVB-RCS2, which are under validation, do not state whether the satellite gateway should introduce a random or a dedicated channel access method to distribute the capacity among the different home users. There is a need for experiments and interpretations of the difference between those methods and their impact on the end-to-end performance.
We develop Physical Channel Access (PCA), a module in NS-2 [50] that considers random and dedicated access schemes to evaluate the end-to-end performance of TCP sessions.
We highlight the interest of random access methods in [51]. The results presented in this chapter are under review for submission as a journal paper [52].

Chapter 6: Exploiting queuing delays to introduce less-than-best-effort traffic on satellite paths

This chapter provides an analysis of the performance of Low Extra Delay Background Transport (LEDBAT) as a legitimate Less-than-Best-Effort (LBE) method for background applications in the context of congested large bandwidth-delay product (LBDP) networks. The IETF recently published an RFC for LEDBAT, a congestion control algorithm for LBE transmissions. The rationale is to explore the possibility of grabbing the unused capacity of 4G satellite links with LEDBAT in order to carry non-commercial traffic. We show that, depending on the fluctuation of the load, performance improvements can be obtained by properly setting the internal parameters of LEDBAT. We generalize this evaluation over different congested LBDP networks and confirm that the target value might need to be adjusted to the network and traffic characteristics [43].
Chapter 3

State of the art

In this chapter, we briefly present the OSI model and the protocols that can be implemented at both the link and transport layers in Section 3.1. We compare existing satellite testbeds and network simulators in Section 3.2. Section 3.3 justifies the need for studying the impact of link layer reliability schemes on the performance of TCP in 4G networks. We present the available tools and studies to illustrate how we contribute in this area. Section 3.4 presents the DVB network and its latest specifications, and highlights why simulations of the impact of channel access methods on the performance of TCP are of interest. Section 3.5 discusses the possibilities of introducing background traffic on satellite links with Low Extra Delay Background Transport (LEDBAT) as the transport layer congestion control.
3.1 Protocol stack

In this section, we present the OSI model of the layered protocol stack in order to introduce notations and clarify the role of each layer. We present the way fragmentation between layers is implemented. We finally detail the reliability schemes that can be introduced at both the physical and link layers, and the transport layer protocols whose performance is investigated.

3.1.1 OSI Model of a protocol stack

In Table 3.1, we present the OSI Model of a protocol stack. This enables us to define notations and clarify the task of each layer. Each layer is dependent upon the layers below it.

Table 3.1: OSI Model

  Layer              Data unit notation                 Task
  Media layers
  1- Physical        Physical Layer Data Unit (PLDU)    modulation, error correction, effective transmission
  2- Link            Link Layer Data Unit (LLDU)        framing, media access control, erasure correction
  3- Network         datagram                           data routing
  End-to-end layers
  4- Transport       segments                           end-to-end reliability, congestion avoidance
  5- Session         data                               managing sessions between applications
  6- Presentation    data                               conversion into machine-independent data
  7- Application     data                               network process to application

In order to better present the interaction between the different layers, we illustrate, at the sender side, the fragmentation of packets from upper to lower layers and, at the receiver side, the recovery time needed to forward packets from lower to upper layers in Figure 3.1.
Fragmentation

If no cross-layer scheme is introduced, the amount of data coming from the upper layer is considered useful data by the lower layer: regardless of the content, this layer may introduce headers, trailers or coding after having split the upper layer packet.

Recovery time

At the receiver side, in order to forward the data to the upper layer, the recovery time is the time needed by the lower layer to receive all the packets that enable it to rebuild the upper layer packet. As a result, this time depends on the algorithm or reliability scheme introduced.

Figure 3.1: Upper and lower layers: illustration of fragmentation and recovery time (sender side: the layer N data unit is split and layer N-1 headers and trailers are added; receiver side: the recovery time is the time elapsed between the arrival of the first and the last layer N-1 data units)

In the rest of this section, we provide more details on the codes and notations, used at the physical, link and transport layers, that are considered in the rest of this document.
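The two notions above can be condensed into a few lines of code (a standalone sketch; the helper names and unit sizes are our own illustrative assumptions, not taken from any implementation in this thesis):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Fragmentation: number of layer-(N-1) data units needed to carry a
// layer-N data unit of `payload` bytes, when each lower layer unit
// carries `unit_payload` bytes besides its own header and trailer.
int fragments(int payload, int unit_payload) {
    return (payload + unit_payload - 1) / unit_payload;  // ceiling division
}

// Recovery time: the upper layer packet is rebuilt only once the last of
// its fragments has arrived (t2 - t0 in Figure 3.1, with t0 the arrival
// of the first fragment and t2 the arrival of the last one).
double recovery_time(const std::vector<double>& fragment_arrivals) {
    auto mm = std::minmax_element(fragment_arrivals.begin(),
                                  fragment_arrivals.end());
    return *mm.second - *mm.first;
}
```

For instance, a 1500-byte datagram carried over 500-byte LLDU payloads yields three fragments, and the datagram is only forwarded upwards once the third fragment arrives.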
3.1.2 Error-control codes

We illustrated in the previous section that, at a given layer, clients and servers may introduce a scheme in order to ensure reliability. To this end, servers compute error-control codes, such as retransmissions or redundancy packets, and clients decode. Various reliability schemes have been defined and can be implemented at different layers. This document focuses on recovery mechanisms implemented at the link layer, in order to study their impact on transport layer protocols. We define the commonly used mechanisms and notations in this section.

Let I = {A, B, C, D} be a set of data units at a given layer (e.g., symbols at the physical layer, PLDUs or LLDUs). An error is defined by a change of information after transmission over the channel: as an example, there is an error event if the transmitter sends A ∈ I and the receiver gets B, C or D. An erasure is defined by a supplementary state, e ∉ I, which indicates that a low layer could not rebuild the packet for the higher layer: as an example, there is an erasure event if the transmitter sends A ∈ I and the receiver gets e. Therefore, when determining the performance of a coding scheme, it is important to notice that error events have to be detected (the receiver has no prior knowledge of the presence of errors), whereas the receiver is directly aware of erasure events.

Among the different techniques to recover data from error and erasure events [17, 6], the following link layer reliability schemes deserve highlights, because of their use in further chapters. At the link layer, retransmissions and erasure codes may recover erased LLDUs, and thus rebuild network layer datagrams. To this end, the mainly used codes are:

Forward Error Correction (FEC)

Forward Error Correction is a scheme where the sender transmits a combination of data and repair LLDUs. Let N_D (resp. N_R) be the number of data (resp. repair) LLDUs and N = N_D + N_R.
The recovery of the data LLDUs is successful if at least N_D of the N LLDUs are received; otherwise (if the number of erasures is strictly greater than N_R), no correction is possible. The FEC scheme does not enable the retransmission of LLDUs.

Automatic Repeat reQuest (ARQ)

The Automatic Repeat reQuest family can be defined by a set of retransmission strategies (Stop-and-Wait ARQ, Go-Back-N ARQ or Selective-Repeat ARQ). We consider here the SR-ARQ mechanism at the link layer level: it consists in the retransmission
of the LLDUs that have been lost during the transmission. We denote SR-ARQ by ARQ.

Hybrid Automatic Repeat reQuest (HARQ)

The Hybrid Automatic Repeat reQuest mechanism is a combination of the FEC and ARQ mechanisms previously described: after the first transmission of a FEC block, including data and repair LLDUs, HARQ allows the sender to transmit additional repair LLDUs when recovery is not possible at the receiver side. In other words, if no correction is possible, the transmission of additional repair LLDUs is requested by the receiver. We denote by FEC(N_D, N) (resp. HARQ(N_D, N)) a scheme where FEC (resp. HARQ) forms a block with N_D data LLDUs and N_R = N − N_D repair LLDUs.

Figure 3.2: One example of FEC, ARQ and HARQ (the channel erases LLDU 1 and LLDU 3; with no scheme, the network layer datagram is lost; FEC(5/7) rebuilds it directly since 5 of the 7 LLDUs arrive; ARQ requests the retransmission of the erased LLDUs; HARQ(5/6) requests the transmission of one additional repair LLDU)

Based on an example, Figure
3.2 illustrates the behaviour of FEC(5/7), ARQ and HARQ(5/6) when the same erasure events occur at the link layer on one network layer datagram.

Moreover, at the physical layer, the codes introduced must detect and correct bit-error events. Physical layer codes also exploit redundancy and retransmission strategies. Among the different existing codes, Low Density Parity Check (LDPC) codes [53] and Turbo Codes [54] are nowadays competing against each other to be implemented on noisy channels. These codes are directly exploited in further chapters.

3.1.3 Router buffering, queue management and congestion

When a datagram is transmitted between node A and node B, it is queued in the network layer sending buffer of A, forwarded to the link layer of A (which introduces headers and trailers, and reliability, as explained in the previous sections), transmitted over the channel, received by the link layer of B, rebuilt and forwarded to the network layer receiving buffer of node B. As we illustrate in Figure 3.3, buffers at the network layer have a limited size (denoted X in the figure, i.e., there can be at most X datagrams in the buffer).

Figure 3.3: Queuing time and buffer overflow (a datagram enqueued at t0 and forwarded at t1 experiences a queuing time of t1 − t0; a datagram arriving when the buffer already holds X datagrams is dropped)

The queuing time of a datagram is the amount of time it stays in the buffer before being forwarded to the link layer. When the buffer is full, the datagram cannot be stored in the buffer and is dropped.
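As a standalone illustration of this drop-tail behaviour (a toy model with our own class and method names, not a router implementation), a buffer of size X that drops on overflow can be written as:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Toy drop-tail buffer of size X, as in Figure 3.3: enque() refuses a
// datagram when X datagrams are already queued; deque() forwards the head
// of the queue (FIFO), so the queuing time of a datagram is the time
// elapsed between its enque() and its deque().
class DropTailBuffer {
    std::deque<int> buf_;    // identifiers of the queued datagrams
    std::size_t limit_;      // buffer size X
public:
    explicit DropTailBuffer(std::size_t x) : limit_(x) {}

    bool enque(int id) {                  // false means overflow: drop
        if (buf_.size() >= limit_) return false;
        buf_.push_back(id);
        return true;
    }
    int deque() {                         // precondition: !empty()
        int head = buf_.front();
        buf_.pop_front();
        return head;
    }
    bool empty() const { return buf_.empty(); }
};
```

With X = 2, a third datagram arriving before any departure is dropped, exactly the overflow event of Figure 3.3.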
When drop events occur, or when the queuing delay increases, transport layer protocols adjust their congestion window in order to prevent the collapse of the network. The next section details how these protocols use those metrics to reduce the amount of data they transmit into the network.

3.1.4 Transport layer congestion control protocols

In this section, even though it may be familiar to an experienced reader, we present the main functions of transport layer congestion control protocols and the mainly implemented protocols. We clarify the key functions of, and differences between, the protocols that we propose to investigate. Studying the impact of low layers on the performance of transport layer protocols requires a tight understanding of their algorithms.

Transport layer congestion control protocols (such as the Transmission Control Protocol, TCP) aim to establish a reliable end-to-end connection while preventing the introduction of congestion. The rationale of the protocols implemented at this layer is to optimize the trade-off between introducing congestion and transmitting data as fast as possible from a server to a client. TCP New Reno [18] extends the out-of-date TCP Tahoe and TCP Reno, both detailed in [55]. The algorithm of this protocol defines the baseline for TCP congestion control and consists in the following phases:

Connection establishment

TCP uses a three-way handshake: (1) SYN: the client sends a SYN to the server; (2) SYN-ACK: the server acknowledges the client's SYN by sending an ACK and sends a SYN to the client; (3) ACK: the client acknowledges the server's SYN.

Acknowledgements and sending buffer

One important feature of TCP is to ensure reliable transmission. Each segment transmitted by the server is copied into a sending buffer and can be identified by a unique sequence number.
The server (1) removes the copied, acknowledged segments from the sending buffer when acknowledgement information from the client is received, and (2) retransmits the segments that the client did not acknowledge (no acknowledgement information before the expiration of a timer, or explicit loss information). Among the different techniques that can be used as acknowledgement information, the client may transmit the highest sequence number of the in-sequence received segments [55] or
a Selective ACKnowledgement (SACK) [56] vector, which contains acknowledgement information for each segment.

Congestion Window (CWND)

The flow control tool with which TCP attempts to avoid congestion is the congestion window: it is the number of segments in flight during one Round-Trip Time (RTT). Its evolution is based on acknowledgement information: its size is computed at the reception of ACKs/SACKs and the segments are transmitted straight after.

Slow start (SS) and congestion avoidance (CA)

The slow-start threshold (ssthresh) determines whether the slow start or the congestion avoidance algorithm is called. The slow-start (when CWND < ssthresh) and congestion avoidance (when CWND ≥ ssthresh) algorithms modify the slow-start threshold value and the congestion window size as detailed in [55].

Fast Retransmit (FR)

The Fast Retransmit algorithm is introduced to detect and repair losses. Lost segments are prioritized and transmitted in the next congestion window to avoid overflowing the buffer of the client. This algorithm also halves ssthresh and sets the congestion window to ssthresh.

TCP was first designed for wired links, not for high-error wireless channels and high delay paths. Loss-based and delay-based TCP versions focus on the congestion avoidance and fast retransmit algorithms to overcome these problems. We present in Table 3.2 the different approaches of TCP New Reno [18], TCP Westwood [19], TCP Vegas [20], TCP Hybla [21], TCP Cubic [22] and TCP Compound [23]. For each protocol, we focus on the congestion avoidance and fast retransmit phases, even if their slow-start and time-out algorithms slightly differ.
Table 3.2: Different versions of TCP (the congestion avoidance rule applies when the server receives an ACK; the fast retransmit rule applies on a loss event or timer expiration)

TCP New Reno (loss-based)
  Congestion avoidance: CWND += 1/CWND
  Fast retransmit: ssthresh /= 2; CWND /= 2
  Comments: baseline of TCP congestion control

TCP Westwood (loss-based)
  Congestion avoidance: CWND += 1/CWND
  Fast retransmit: ssthresh = AB; CWND = AB
  Comments: extension of the FR of TCP New Reno; estimation of the Available Bandwidth (AB)

TCP Vegas (delay-based)
  Congestion avoidance: CWND += ±1/RTT
  Fast retransmit: none; if loss, revert to TCP New Reno
  Comments: basic delay-based TCP

TCP Hybla (loss-based)
  Congestion avoidance: ρ = RTT/RTT_ref; CWND = f(ρ, RTT, ssthresh)
  Fast retransmit: ssthresh /= 2; CWND /= 2
  Comments: designed for satellite links

TCP Cubic (loss-based)
  Congestion avoidance: with Wmax the window size before the last reduction, CWND = Wmax ± f(Wmax)³ — fast progression towards Wmax when CWND << Wmax, slow approach around Wmax when CWND ≈ Wmax, fast progression after congestion when CWND > Wmax
  Fast retransmit: ssthresh /= 2; CWND ×= 0.8
  Comments: implemented in GNU/Linux

TCP Compound (delay- and loss-based)
  Congestion avoidance: CWND = CWND1 + CWND2, with CWND1 (loss-based) following TCP New Reno and CWND2 (delay-based) following TCP Vegas
  Fast retransmit: CWND1 = (1 − β) × CWND1; CWND2 = max(c1 − c2, 0), with c1 = (1 − β) × CWND and c2 = CWND/2
  Comments: implemented in Windows systems
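To make the New Reno rules of Table 3.2 concrete, the window evolution can be sketched in a few lines (a standalone toy model in units of segments; the struct name and initial values are our own illustrative assumptions):

```cpp
#include <cassert>

// Toy model of the TCP New Reno rules of Table 3.2 (units: segments).
struct NewReno {
    double cwnd;       // congestion window
    double ssthresh;   // slow-start threshold

    // Reaction to one acknowledgement: exponential growth in slow start
    // (+1 segment per ACK, doubling per RTT), linear growth in congestion
    // avoidance (+1/CWND per ACK, i.e. about one segment per RTT).
    void on_ack() {
        if (cwnd < ssthresh) cwnd += 1.0;         // slow start
        else                 cwnd += 1.0 / cwnd;  // congestion avoidance
    }

    // Fast Retransmit: halve ssthresh and set CWND to ssthresh.
    void on_loss() {
        ssthresh = cwnd / 2.0;
        cwnd = ssthresh;
    }
};
```

Starting from a small window, the model doubles per RTT until ssthresh is reached, then grows by one segment per RTT, and halves on each loss — the sawtooth pattern at the heart of the loss-based protocols of Table 3.2.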
Recent studies propose a number of modifications to the TCP protocol (mainly in the Congestion Avoidance and Fast Retransmit phases) to improve TCP performance over mobile or long-delay links.

Proportional Rate Reduction (PRR)

PRR [57] is an extension of the Fast Retransmit algorithm that enables TCP to recover from losses quickly by reducing the congestion window less often.

Adaptive end-to-end Congestion control Protocol (ACP)

ACP [58] is a delay-based congestion control for high bandwidth-delay product networks where the congestion window size is based on queue size estimations and a measure of fair sharing.

Initial congestion window

For instance, Cubic's initial congestion window is set to 2 datagrams; however, the authors of [59] measure that increasing this parameter up to 10 datagrams improves the web browsing experience (less latency) without introducing congestion.

New transport layer protocols are proposed every day to overcome specific problems; we just present here some interesting extensions. In our evaluation of performance, we focus on protocols that are currently implemented (presented in Table 3.2).

3.1.5 On the need for cross-layer simulations in the context of high BDP paths

In Why latency matters to mobile backhaul 1, the authors highlight that even though new mobile technology generations reduce the latency, each component of the network adds delay and may severely impact the end user experience. As an example, on top of the transmission delay of a satellite link (254 ms), the various delays that are added along the path bring the one-way delay to 329 ms. Badly set parameters on one of the links used on the end-to-end path may severely increase the transmission delay and reduce the quality of the user experience. As a result, in order to resolve those issues, cross-layer impacts should be investigated.

1 Published by O3b Networks and Sofrecom. Available at: mobile-backhaul.
O3b Networks is a global satellite service provider and Sofrecom, a France Telecom Orange Group company, is a world leader in telecommunications consulting and engineering.
In Section 3.2, we explain the problems in experimenting on satellite paths and justify the choice of network simulation. We present the need for studies on the impact of link layer reliability schemes on the performance of TCP in the context of 4G in Section 3.3, and the evaluations of the impact of access methods on the performance of TCP in the context of DVB-RCS2 in Section 3.4. Section 3.5 highlights the possibility of exploiting satellite gateway information to enable less-than-best-effort traffic.

3.2 Evaluation of the performance of satellite networks

In this section, a comparison between satellite testbeds and network simulation tools is introduced in order to justify why we run simulations in NS-2 in the rest of this document.

3.2.1 Satellite networks and testbeds

Satellite links are characterized by a high bit-error probability (i.e., the bit-error ratio (BER) can be up to 10% [1, 60]) and long delays (i.e., the standard one-way delay for satellite communication, in geosynchronous orbits, is around 250 ms). Several satellite testbeds have been set up to experiment with the performance of protocols at different layers. Among the existing testbeds, we can quote OpenSand 2 and the Satellite Networking Testbed 3, which both enable measurements of the performance of end-to-end protocols with real implementations. Their use is of interest to validate existing protocols and evaluate realistic QoS experiences. However, they might not be suitable for futuristic and exotic approaches, and even though these tools are compulsory for validation processes and easy to use, they cannot replace the necessity of having cheap and fast simulation tools in the early stages of projects.

2 See:
3 See:
3.2.2 Network simulators

Surveys of existing network simulation tools [61, 62] conclude that OPNET 4, OMNeT++ 5, NS-2 [63] 6 and NS-3 7 are well suited to running simulations of various networks. OPNET is expensive to use and a user cannot modify the source code, whereas OMNeT++, NS-2 and NS-3 are open-source solutions which can locally be extended for exotic simulations. Moreover, the authors of [64] compare the performance of existing open-source network simulators and confirm that OMNeT++, NS-2 and NS-3 are viable solutions. NS-2 is widespread in the research community and contains a large number of features, which makes it a suitable choice for network simulations. Indeed, the new version of NS-2 (i.e., NS-3) is still under development and does not provide as many features as NS-2 does. 8

As far as we know, there are no simulation tools dedicated to satellite networks. However, adapting the capacity and delays of satellite links and the characteristics of gateways is not an issue with the simulators presented above.

3.2.3 On the difficulty to simulate the protocol stack

As highlighted in Sections 3.2.1 and 3.2.2, network protocols can be evaluated by measurements on testbeds or by simulations. Even though their results are accurate and relevant, satellite testbeds suffer from many drawbacks. Access to the media might be expensive without specific rights. Also, without super-user rights, exotic simulations cannot be launched and the protocols of the different layers cannot be modified much. The authors of [65] argue that "The cost inefficiency of this method does not involve only the technology cost but also the distributed man-in-the-loop manipulations and synchronization required. Moreover, it is sometimes impossible to use this approach simply because the new technology support is not yet validated or available, e.g. when developing an application over a new satellite transmission technology that is not yet operating."
Moreover, the level of realism provided by satellite testbeds might not be necessary for the study of high layer behaviours. Indeed, Gurtov and Floyd claim that a better trade-off between generality, realism and accurate modeling can be found to improve transport protocol performance evaluation [66]. Following this idea and the need for cheap (in terms of simulation time and computer processing) and realistic evaluation tools, this document argues for methods to integrate low layers in the high-level network simulator NS-2 (trace-based and event-driven approaches). Based on the information gathered in this section, these methods can provide the following benefits: consideration of the latest implementation of each layer; low cost in terms of simulation time; modular architecture; easy evaluation of cross-layer agent solutions.

4 See:
5 See:
6 See:
7 See:
8 In the rest of this document, we provide developments in NS-2; however, I would advise further implementations to be made in NS-3 to contribute to its development.

3.2.4 Sublayer fragmentation in NS-2

In this section, we present the basic structure of the NS-2 code to ease the understanding of the following description of NS-2 modules. NS-2 is an object-oriented network simulator written in C++: it is composed of many classes (such as Node or Queue) and, once compiled, the simulation parameters (network topology, traffic...) are defined by the user in one OTcl file. The OTcl interpreter creates the objects and the calls to the protocols at the different layers (network, transport, application).

Figure 3.4 illustrates that the function enque() is called when a datagram arrives in the queue. When the channel is idle, the function deque() is called to transmit a datagram, chosen depending on the queuing mechanism. The queuing system in NS-2 is mainly driven by the following entities: datagrams (with arrival time and service time attributes) and queues (with empty and non-empty attributes). Sublayer modules do exist in NS-2; however, instead of triggering them for our purpose, we adapt the sending buffer reordering and the dates at which deque() is called. We model the behaviour of both the link and physical layers by delaying the transmission of datagrams and/or introducing drop events.
To introduce a new queuing policy, a user can redefine the enque() and deque() methods in two files (e.g., newqueuing.cc and newqueuing.h) in the queue/ sub-directory of the NS-2 sources.
Figure 3.4: enque() and deque() methods in NS-2 (datagrams enqueued in the sending buffer of a node are dequeued onto the path towards the receiving buffer of the next node)

In newqueuing.cc, the NEWQUEUING flag must be specified in the definition of the class to link the OTcl and C++ code.

    static class NEWQUEUINGClass : public TclClass {
    public:
        NEWQUEUINGClass() : TclClass("Queue/DropTail/NEWQUEUING") {}
        TclObject* create(int, const char* const* argv) {
            return (new NEWQUEUINGQueue(argv[4]));
        }
    } class_clift;

The next line illustrates how to launch a simulation with NEWQUEUING as the queuing mechanism and with args0 as an input parameter given by the OTcl file.

    $ns simplex-link $N1 $N2 $Bd $D DropTail/NEWQUEUING $args0
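Independently of NS-2, the trace-driven idea described above — deferring deque() to delay datagram transmissions and introducing drop events according to a link layer trace — can be sketched as follows (a standalone model; the struct, function names and trace encoding are our own assumptions, not part of TMT or CLIFT):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Outcome for one datagram: the time at which it leaves the link layer,
// or a drop flag if the trace marks it as lost.
struct Event { double time; bool dropped; };

// For each datagram, the trace supplies a transmission delay; a negative
// entry encodes a drop event. A datagram waits until the channel is idle
// (previous transmission finished) before being sent.
std::vector<Event> schedule(const std::vector<double>& arrival,
                            const std::vector<double>& trace_delay) {
    std::vector<Event> out;
    double link_free = 0.0;                 // time the channel becomes idle
    for (std::size_t i = 0; i < arrival.size(); ++i) {
        if (trace_delay[i] < 0) {           // drop event read from the trace
            out.push_back({arrival[i], true});
            continue;
        }
        double start = std::max(arrival[i], link_free);
        link_free = start + trace_delay[i];
        out.push_back({link_free, false});
    }
    return out;
}
```

Three datagrams arriving together with trace entries {1, -1, 2} thus leave at times 1 and 3, with the second one dropped — the kind of per-datagram scheduling that a redefined deque() performs inside the simulator.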
Following this idea, modeling the lower layers in NS-2 is not an issue, and it enables the cross-layer evaluations whose necessity is highlighted below.

3.3 Link layer reliability schemes and TCP for 4G networks

In the context of 3G/4G satellite and Land Mobile Satellite (LMS) channels, the authors of [1, 2] show that the variable conditions of the mobile radio channel and the long error bursts introduced at the physical layer result in data losses, due to the inability of the error correcting schemes to handle such conditions. The implementation of physical layer schemes is commonly tied to specific hardware, making it ill suited to modifications after the design or deployment of the system. To overcome the extremely challenging conditions of the mobile satellite environment, reliability schemes can be introduced at the link layer in order to recover data that the physical layer may not be able to rebuild.

The commonly used reliability schemes (such as Forward Error Coding: FEC, Selective-Repeat Automatic ReQuest: SR-ARQ, denoted ARQ, and Hybrid Automatic ReQuest type II: HARQ-II, denoted HARQ) have been extensively studied for the physical layer [17, 6]. However, the performance of these schemes may differ at the link layer: e.g., error correction coding schemes can utilize the available information on the error locations (i.e., they would be used as erasure coding schemes) and would hence provide improved performance for the same level of overhead. Initial investigations of such schemes, reported in [5, 4], rely on models of the physical layer that by their nature include approximations of real world conditions. Therefore, further investigation of their performance is needed under realistic conditions, which can only be achieved by considering real physical layer performance. Additionally, previous studies do not evaluate the satellite component of 4G networks, which significantly differs from the terrestrial wireless component [1, 2].
Finally, we note that the performance of link layer schemes should not be considered in isolation. Any combination of physical and link layer schemes will jointly result in an overall link performance that can be characterized by a certain magnitude (and distribution) of errors and end-to-end delays. This performance will impact the performance of transport protocols and, ultimately, the applications using them, and also needs to be evaluated under the same realistic conditions.
The goal of this study is to evaluate the impact of the reliability schemes introduced at the link layer, for the satellite component of 4G services, on the variants of TCP that have been proposed for use in such services. The trace-based tool can be shortly defined by the following statement: we propose to load link layer traces into an NS-2 module in order to schedule the transmission of the datagrams. As a result, we expect to load realistic physical layer traces (based on the most recent codes at the physical layer) and compute link layer traces depending on the reliability scheme introduced. This approach enables us to study the extremely challenging conditions of the mobile satellite environment, where reliability schemes can be introduced at the link layer in order to recover data that the physical layer may not be able to rebuild.

In order to better position our proposal, we present in this section related work on cross-layering issues, as well as existing tools allowing to drive cross-layering analysis. To ease the reading, we have sliced this related work following the objectives and the layer targeted by each study.

3.3.1 From 3.9G to 4G

4G is the fourth generation of mobile communication standards. The applications considered to run over 4G include mobile web access, IP telephony, gaming services and high-definition mobile TV. This application context requires very high bandwidth and mobility. In 2008, the International Telecommunications Union-Radiocommunications sector (ITU-R) 9 introduced requirements for 4G standards, i.e., International Mobile Telecommunications-Advanced (IMT-Advanced) [16]. The peak speed requirements include 100 Mbps for high mobility transmissions (from trains or cars) and 1 Gbps for low mobility transmissions. We note many problems in determining what is considered 4G 10, with service providers often branding a service as 4G regardless of the lower data rate offered and of non-compliance with the IMT-Advanced requirements.
ITU-R recognized that Mobile WiMAX [67] and LTE-Advanced [68, 69] could be considered 4G, as they offer important improvements compared to 3G networks 11,12.

9 ITU-R ensures the rational, equitable, efficient and economical use of the radio-frequency spectrum by all services, including those using satellite orbits.
LTE-Advanced was proposed by NTT Docomo 13 and WiMAX by the WiMAX Forum 14. As a result, we can assume that the performance requested by ITU-R is achievable, and service providers around the globe have started to implement these standards. In the rest of this document, we denote by 4G network a standard compliant with the IMT-Advanced requirements. IMT-Advanced also includes a satellite link component, for the provision of 4G services to remote areas. In fact, several service providers currently offer 4G using satellite services, such as Telstra 15 in Australia or Xplornet 16 in Canada, demonstrating the feasibility of such services. In this document, we focus on data transmission over a single satellite link, to a receiver located in a remote area covered by the 4G satellite network.

3.3.2 MAC/PHY considerations

As previously noted in the introduction, due to the mobility of the receivers, long bursts of bit errors can prevent the physical layer codes from decoding useful data. As a result, in LTE-Advanced cellular systems, H-ARQ is introduced at the link layer to recover the lost data [3, 4]. A number of different techniques are considered to recover the lost data. As an example, in [5, 6], the authors propose a performance evaluation of an analytical model of hybrid FEC/ARQ (HARQ); however, they do not address the impact of the bursty nature of the channel at the link layer. Also, in [70], the authors focus on two efficiency criteria of Automatic Repeat reQuest (ARQ) schemes: throughput and computational complexity. They present throughput expressions in memoryless channels for the various reliability schemes at the physical layer. However, their results cannot be directly exploited, as the error model presented is not applicable to mobile satellite links.
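To make the kind of memoryless-channel throughput expressions mentioned above concrete, consider an idealized ARQ scheme in which a data unit lost with probability p is simply retransmitted until it is received: the expected number of transmissions is 1/(1-p), so the throughput efficiency is (1-p). The following sketch (with an arbitrary p, not a value taken from [70]) checks this by Monte-Carlo simulation:

```python
import random

def arq_transmissions(p, rng):
    """Number of transmissions until one data unit gets through (ideal ARQ)."""
    n = 1
    while rng.random() < p:  # each transmission is lost with probability p
        n += 1
    return n

rng = random.Random(42)
p = 0.2
trials = 100_000
mean = sum(arq_transmissions(p, rng) for _ in range(trials)) / trials
# Geometric distribution: E[transmissions] = 1/(1-p) = 1.25 for p = 0.2
```

On a bursty (non-memoryless) channel, losses cluster and such closed-form expressions no longer apply directly, which is precisely why the error model matters.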
In [71], the authors present an analytical model of Hybrid ARQ techniques over Discrete Time Markov Channels using an appropriate Markov chain, which tracks the transmission outcome and can be used to evaluate several performance metrics, including throughput, loss probability, number of retransmissions, and delay. We follow the same approach to validate our proposal, but we need to control the reliability scheme parameters to introduce FEC and ARQ, which cannot be done with the tool the authors present in [71]: we need to adapt their expressions to our context.

13 NTT DOCOMO is Japan's premier provider of leading-edge mobile voice, data and multimedia services. With more than 60 million customers in Japan, the company is one of the world's largest mobile communications operators.
14 The WiMAX Forum's primary goal is to accelerate the adoption, deployment and expansion of WiMAX technologies across the globe while facilitating roaming agreements, sharing best practices within our membership and certifying products.
15 Telstra is Australia's leading provider of mobile phones, mobile devices, home phones and broadband Internet.
16 Xplornet Communications Inc. (formerly Barrett Xplore Inc.) is Canada's leading rural broadband provider.

3.3.3 Simulating the impact of PHY/MAC on transport layers

As pointed out in [72], when satellite links are part of the network, standard transport protocols are sub-optimal, and the satellite network and transport protocols should be designed jointly. We believe that adapting the reliability schemes at the link layer could greatly increase the performance of transport protocols over satellite links, and we propose tools to aid this design. In [73], the authors detail a physical layer simulator that bridges the MAC and physical layers in ns-3. They assess the benefits provided by mixing physical layer tools and the upper layers of network simulators, and propose a solid approach to bridge the gap between those layers and enable cross-layer studies. However, many protocols are yet to be implemented for this new version of ns-3 to be completely operational. Our proposal, CLIFT, is not a physical layer simulator (as opposed to [73]) but a way to take physical layer traces into account inside a network simulator. We propose to separate the generation of measured or simulated traces from the network simulator functionality, as this increases the adaptability of the tool. In [74], the authors present a wireless link and network emulator for the Wireless IP 4G system proposal from Uppsala University and partners. They introduce the Wireless IP system, describe the emulator design and implementation, and present experimental results with TCP in combination with various physical and link layer parameters.
However, they only consider ARQ mechanisms at the link layer, and the presented testbed does not include satellite links: we can expect more erasures at the link layer and a different impact of the link layer reliability schemes. For mobile satellite conditions, to the best of our knowledge, there are no existing 4G testbeds that could enable the same measurements as those we propose in this document. As far as we know, there is no open-source simulation tool to conduct an extensive evaluation of the impact of link layer retransmissions on TCP.
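As a minimal illustration of the Discrete Time Markov Channels used in [71] to capture bursty losses, the sketch below simulates a two-state (Gilbert-Elliott style) erasure channel: a good state with rare erasures and a bad state producing error bursts. All parameter values are illustrative assumptions, not figures from [71]:

```python
import random

P_GB, P_BG = 0.01, 0.20                # good->bad and bad->good transition probabilities
P_ERASE = {"good": 0.001, "bad": 0.5}  # per-PLDU erasure probability in each state

def simulate_erasures(n, rng):
    state, erased = "good", 0
    for _ in range(n):
        erased += rng.random() < P_ERASE[state]
        # state transition of the two-state Markov chain
        if rng.random() < (P_GB if state == "good" else P_BG):
            state = "bad" if state == "good" else "good"
    return erased / n

rng = random.Random(1)
loss_rate = simulate_erasures(100_000, rng)
# Stationary bad-state probability is P_GB/(P_GB+P_BG), about 0.048, giving an
# average erasure rate of roughly 0.048*0.5 + 0.952*0.001 (about 0.025), but
# the erasures arrive in bursts rather than independently.
```

The average loss rate alone hides the burstiness; it is the clustering of erasures that defeats per-packet FEC and motivates link layer retransmission schemes.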
3.3.4 Impact of the MAC layer on transport layer performance

Introducing redundancy at this level can prevent the transport layer from decreasing its congestion window in case of isolated errors. The interactions between transport and link layers have already been studied in several articles [7, 8, 9], but the specifically high erasure rate in our context and the high performance of the most recent transport layer protocols lead us to believe that further work is needed on this topic. Indeed, in [10], the authors focus on the window flow control mechanism of TCP, and provide an exact model for a hybrid space-terrestrial system with transport layer Additive-Increase Multiplicative-Decrease (AIMD) protocols and satellite link layer ARQ. The results of their study prove that, in most cases, implementing ARQ at the satellite link layer can significantly improve TCP performance. They also show that the system performance can be improved if the protocol choices are made carefully. However, they consider outdated congestion control algorithms (TCP Tahoe and TCP Reno) and basic ARQ at the link layer. In [11], the authors propose an analytical model to assess the performance of TCP Tahoe over reliability schemes at the link layer, and conduct simulations in ns-2. They show that there is no need for FEC, and that ARQ alone achieves the best performance. We have to check whether their results still hold when the physical layer error rate is high (mobile receivers) and the transport protocols are more recent or delay-based.
Even if the most common link layer reliability scheme is ARQ, the authors of [12] illustrate the good performance of HARQ at the link layer when there are satellite links in the network, which strengthens our motivation to investigate this solution for mobile receivers.

3.3.5 Solutions to improve the performance of TCP

Recent studies propose a number of modifications to the TCP protocol (mainly in the Congestion Avoidance and Fast Retransmit phases) to improve TCP performance over mobile or long-delay links. Transport protocols are still an active research topic, and the presented list of extensions is by no means exhaustive. We focus on the implemented transport layer protocols: indeed, as illustrated in [75], extensions of CUBIC achieve solid performance in the context of high Bandwidth Delay Product (BDP) paths. Several cross-layer designs have been implemented to improve the performance of transport layer protocols over wireless links [76, 77, 78, 79] and over satellite links when a Performance Enhancing Proxy (PEP) is enabled [80, 81]. However, the proposed cross-layer solutions can hardly be implemented in the real world, and the cross-layer impact of protocols implemented at different layers should be investigated before proposing cross-layer schemes, except in closed environments.

3.3.6 Benefits of our approach: realistic cross-layer measurements

Although there are a large number of proposals for improving the performance of transport protocols on links with a high BDP, most have no real-world implementation in any operating system. This includes both the cross-layer proposals and the (TCP-based) transport layer solutions. Therefore, to provide a novel study of the impact of various reliability schemes at the link layer in the context of 4G satellite channels, we propose to assess the performance of the main TCP variants that have available implementations (TCP NewReno, TCP Westwood, TCP Compound, TCP Hybla and CUBIC), aided by realistic physical layer traces. Introducing retransmissions at different levels of the protocol stack is a complex problem. Considering the long RTT of the satellite path, by the time the erasure event at the link layer is registered at the sender, the transport protocol may have already reduced its congestion window. We also note that retransmissions at these layers are not a matter of choice, as they are enabled by default in current 4G systems. In this document, we primarily address the benefits provided by the transmission of redundancy blocks with HARQ-II mechanisms at the link layer.

3.4 On the need for evaluations of the impact of channel access methods on TCP for DVB-RCS2 links

The Digital Video Broadcasting Project (DVB) is a consortium committed to designing standards for video and data services. 17 The satellite service of DVB is divided between the transmission system (from providers to users) and the control channel (from users to

17 More details can be found in
providers). The current standards for the satellite links are DVB-S2 (diffusion channel) and DVB-RCS (return channel). In 2011, DVB validated the first two (of three) specifications for the second generation of the satellite return channel, DVB-RCS2 18. These first two specifications are detailed in [27], which presents an overview of the system description, and in [28], which details the standards for the satellite lower layers; the third one, [29], details the higher-layer specifications. The specifications of DVB-RCS2 [28] have also been submitted to the European Telecommunication Standards Institute (ETSI) for formal standardisation. This new version for the transmission of data from home users to satellite gateways features enhanced security, improved Quality of Service and support for IPv6. However, the performance of TCP depends on decisions made at the lower layers. Also, as shown in [30], the performance of access schemes is strongly linked to the traffic characteristics (e.g., the size of the files transferred). Therefore, introducing Internet-based services in DVB-RCS2 requires adequate measurements to determine the access channel strategy that optimizes performance and enables the maximum level of fairness between the users. We propose to adapt the transmission time of the datagrams in NS-2 by modeling the events that can occur at both the physical and link layers. We model the fragmentation of link layer data depending on the channel allocation scheme and the load on the network. This enables us to have flexible simulation parameters for the different components of the network and for the channel access methods in the context of DVB-RCS2. In order to better position our proposal, we review in this section related work on the tools that enable simulation of the DVB-S2/RCS network, and the existing studies on the impact of access methods on transport layer protocols.

3.4.1 Notations

On an MF-TDMA link, the capacity is shared at the Access Point.
The access point forwards traffic between one or more satellite gateways and satellite terminals (home users) over the shared medium, therefore covering both uplink and downlink scenarios. For clarity, we provide some definitions of the terms used to describe MF-TDMA processes:

18 Information taken from the press release "DVB-RCS2 MEETS WITH APPROVAL", available at
Flow: data transfer at the transport layer;
Frame: time-frequency set of data bursts transmitted between gateways and users, generated every T_F;
Datagram: network layer segment of a flow;
Link Layer Data Unit (LLDU): N_data bytes of a fragmented datagram;
Physical Layer Data Unit (PLDU): an LLDU with optional N_repair recovery bytes (N = N_data + N_repair);
Block: PLDUs can be further split into N_block blocks if the access method requires it;
Slot: element of a frame in which a block can be scheduled.

In the following, we present the channel access strategies defined in the current specifications [28] and highlight the need for further simulations of the impact of channel access methods on the performance of TCP when DVB-RCS2 is used for Internet traffic.

3.4.2 Channel access on the return link

The channel access methods define the way data can be transmitted between the satellite gateway and the home users (Satellite Terminals, ST): the satellite medium is shared among the users, and protocols must allocate resources to each of them. To do so, the resource is distributed every T_F = 45 ms by the Network Control Center (NCC). The NCC is the element that adapts the repartition of the available slots at each frame, in order to (1) accept latecomer flows; (2) adapt the time slot reservation depending on each user's characteristics (different priorities between the users); (3) adjust the distribution of time slots depending on the network load; (4) optimize the modcod (modulation and coding) at the physical layer for dedicated access methods [27]. The NCC transmits a Burst Time Plan (BTP) to the users that indicates when and how to transmit data. Several timeslots are available per frequency. A time-frequency block is called a frame [28, p. 157, sec ], detailed in Figure 3.5. We denote by N_S the number of time slots available per frequency.
The frequencies on which data is transmitted can be divided depending on the access method: F_R frequencies are dedicated to the random access methods and F_D frequencies are reserved for the dedicated access methods.

[Figure 3.5: Time-frequency block description. A frame of duration T_F carries N_S time slots on each frequency; F_R frequencies are used for random access, F_D for deterministic access, plus login and control slots.]

In total, a frame can carry N_S (F_R + F_D) slots. The BTP specifies on which timeslots each user can transmit data [28, p. 42, sec ]. The resource distribution depends on the nature of the flows, the network load and the access methods. The standard defines two strategies for channel access: Dedicated Access [28, p. 131, sec ] and Random Access [28, p. 125, sec ].
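As a quick numeric check of the per-frame capacity N_S (F_R + F_D) described above, the sketch below uses T_F = 45 ms from the text; N_S, F_R and F_D are arbitrary illustrative values, not figures from the DVB-RCS2 specifications:

```python
# Worked example of the per-frame slot capacity N_S * (F_R + F_D).
# T_F = 45 ms comes from the text above; the other values are illustrative.
T_F = 0.045  # frame duration in seconds
N_S = 8      # time slots per frequency
F_R = 2      # frequencies dedicated to random access
F_D = 3      # frequencies reserved for dedicated access

total_slots = N_S * (F_R + F_D)       # slots carried by one frame
slots_per_second = total_slots / T_F  # aggregate slot rate offered by the link
```

With these numbers, a frame carries 40 slots, i.e. roughly 889 slots per second to be shared between random and dedicated access.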
Dedicated Access

With Dedicated Access methods, STs transmit data on reserved timeslots that no other terminal uses. Depending on the load on the network, the NCC computes an adequate BTP for each ST that has requested satellite capacity and established the connection [28, p. 132, sec ]. Therefore, these methods ensure a reliable transmission of data, but add a negotiation delay of at least one RTT. The reservation ensures that capacity is fairly distributed: if there are 40 slots available and 10 users, each user can transmit data on 4 slots.

Random Access

With Random Access methods, traffic from different STs might overlap in one slot. The NCC cannot ensure that different terminals transmit data on different time slots, which prevents a reliable transmission. Stronger error codes are therefore introduced at the physical layer: N_repair redundancy bytes are added to the N_data data bytes to form a code word of N = N_data + N_repair bytes, which is split into N_block blocks. N_ra slots form a Random Access block (RA block) on which erasure codes are introduced [28, p. 126, sec ]. Each transmitter randomly spreads its N_block blocks across the N_ra slots of the RA block for spectral diversity. In [82], the authors define guidelines for designing Random Access methods, and assess the performance of CRDSA [38]. Among the different Random Access methods, we can cite the following: Multi-Slots Coded ALOHA (MuSCA) [35], ALOHA [36], Diversity Slotted ALOHA [37] and Contention Resolution Diversity Slotted ALOHA (CRDSA) [38]. One of the main advantages of random access methods is that no resource reservation request is needed, which reduces the access delay. Data is transmitted right after the connection establishment between the NCC and the ST. The performance of random access methods can be described by the probability that a receiver decodes its N_data useful bytes depending on the number of users that transmit data on the RA block.
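The random spreading of the N_block blocks over the N_ra slots of an RA block can be sketched as follows; the helper name and all numeric values are illustrative assumptions. Slots carrying blocks from several users collide, which is why the decoding probability degrades as the number of users grows (methods such as CRDSA then try to resolve these collisions):

```python
import random

def singleton_slots(n_users, n_block, n_ra, rng):
    """Each user places n_block blocks on distinct random slots of the RA block;
    return how many slots carry exactly one block (i.e. are collision-free)."""
    occupancy = [0] * n_ra
    for _ in range(n_users):
        for slot in rng.sample(range(n_ra), n_block):  # distinct slots per user
            occupancy[slot] += 1
    return sum(1 for load in occupancy if load == 1)

rng = random.Random(7)
few = singleton_slots(2, 3, 64, rng)    # lightly loaded RA block
many = singleton_slots(40, 3, 64, rng)  # heavily loaded RA block
# With 2 users most blocks land alone; with 40 users collisions dominate,
# and only a fraction of the 40*3 transmitted blocks land in clean slots.
```

This is only the contention part of the problem; the erasure code spread over the RA block can recover some of the collided blocks, which is what Table 3.3-style characterizations capture.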
Table 3.3 shows a generic example of such a description, where P_i,j is the probability that a packet cannot be recovered by the receiver when there are N_U ∈ [NbUser_j; NbUser_j+1] users on the RA block and the signal-to-noise ratio of the channel is Es/N0_i.
Table 3.3: Random access method performance

            NbUser_1   NbUser_2   NbUser_3   ...   NbUser_26
Es/N0_1     P_1,1      P_1,2      P_1,3      ...   P_1,26
Es/N0_2     P_2,1      P_2,2      P_2,3      ...   P_2,26
...
Es/N0_X     P_X,1      P_X,2      P_X,3      ...   P_X,26

In [28, p. 126, sec ], the authors advise that the ST "shall by default not transmit in contention timeslots for traffic, but may do this when explicitly allowed by indication in the Lower Layer Service Descriptor or by other administrative means", making the random access methods used mostly for the login procedure [28, p. 182, sec ] and optionally for traffic. Moreover, in [28, p. 42, sec ], the specifications define that the Terminal Burst Time Plan Table version 2 (TBTP2) "may be used to assign dedicated access timeslots, [...] allocate timeslots for random access and indicate the access methods used for each timeslot", adding this information to the BTP. Therefore, the specifications present the potential for introducing both random and dedicated access methods without explaining in which proportion the timeslots of the frames must be divided between them, even though [28, p. 126, sec ] highlights a preference for dedicated access methods.

Comparison between dedicated and random access methods

With dedicated access methods, the channel is reserved for the user, which enables the NCC to choose an optimal modcod. As a result, the use of the satellite link capacity is optimized. It follows that (1) the communication is reliable and (2) the throughput is maximal. With random access methods, there is no resource reservation step, which reduces the link access delay. The modcod cannot be optimized, as the channel between the home user and the gateway is not known. Moreover, in "Why latency matters to mobile backhaul", the authors explain that Google measured that "an additional 500 ms to compute (a search) [...] resulted in a 25% drop in the number of searches done by users". This raises our interest in evaluating the transmission times of small HTTP requests with random access methods.

3.4.3 Existing proposals to simulate the DVB-RCS2 link

A model of the DVB-S2/RCS satellite network in NS-2 has already been proposed in [83]. The authors present an NS-2 module that attempts to be as close as possible to the real system in its behavior and the layout of its components. Their model uses a separation into two queues to simulate the random and dedicated approaches. However, we need to study the impact of the capacity dedicated to each access method, and to integrate specific inputs, such as the performance of experimental random access methods or other internal parameters of the NCC component, which is not possible with the module presented in their paper. Therefore, their approach does not provide enough flexibility for our study. In [84], the authors propose MFTDMA-DAMA, a set of modules for NS-2 that enables simulations of heterogeneous networks containing DVB-RCS links. This complex model might be accurate; however, it cannot accommodate major changes to the channel access strategies, which is the topic we focus on here. To the best of our knowledge, no other DVB-RCS2 model exists that is flexible enough in terms of infrastructure modifications.

3.4.4 Existing studies on the impact of access methods on TCP

In this section, we present recent studies that assess the impact of channel access strategies on the performance of TCP.

TCP and Dedicated Access

In [30], the authors analyse the interaction between the TCP and DVB-RCS control loops by exploring the performance of TCP over the different capacity allocation categories defined in the DVB-RCS standard. They show that the performance of access schemes is strongly linked to the traffic characteristics, such as the size of the flow or the required QoS. In [31], the authors highlight that, in the context of Demand Assigned Multiple Access (DAMA), 19 delay variability severely impacts the performance of TCP.
They run simulations with NS-2 and analyse the MAC-TCP interactions to improve the performance of TCP New Reno. They then propose a cross-layer technique based on queue sizes at the MAC layer. Their proposal is not adapted to short flows nor to the DVB context, and focuses only on dedicated access methods. In [32], the author analyses the performance of competing TCP flows using different return channel satellite terminals (RCSTs) and competing for the DVB-RCS return link through DAMA mechanisms. They consider an emulated network and observe relatively poor performance. They only focus on the dedicated access method. The studies presented above [30, 31, 32] focus on dedicated access methods and highlight the difficulties encountered by TCP in transmitting data on the return link. They show the negative impact of the queuing delay introduced by access methods, but do not study the impact of random access methods.

19 Demand Assigned Multiple Access (DAMA) is a satellite access technique that matches user demand to available satellite capacity.

TCP and Random Access

In a mobile context (mobile cars and satellite links), the authors of [85] show that, when users act as senders, random access methods are not suitable; however, depending on the size of the file transmitted, there is a certain interest in dedicating more timeslots to random access methods when the users act as receivers. Their results cannot be exploited in our specific context, because their model of mobile satellite links and the capacities and access strategies they consider do not apply. However, following this idea, the authors of [33] highlight a possible advantage in introducing more random access methods in DVB-RCS2. More recently, the authors of [34] assess the issues encountered by TCP over CRDSA in the context of DVB-RCS2. They conclude that the recent improvements in the performance of random access methods should be considered in the determination of transport layer performance. However, they do not conduct extensive simulations, neither in terms of the traffic considered nor of the channel access strategies, which makes their study insufficient to properly determine the parameters of the DVB-RCS2 links.
The analysis of the poor performance of TCP over highly variable delays is relevant [30], and reducing the connection establishment delay with random access methods might be of interest [33, 34]. However, existing studies do not provide a definitive answer as to whether random access methods should be introduced to carry data traffic. As far as we know, there is a lack of relevant comparisons of the various channel access strategies in the context of DVB-RCS2.
3.4.5 Benefits of our approach: flexible access methods

Section 3.4.4 concludes that, even though several studies assess the performance of TCP over random access methods, there is a lack in the literature of studies that compare the performance of dedicated and random access methods in the recently proposed DVB-RCS2 architecture. Indeed, Section 3.4.2 highlights that the current specifications allow implementing both random and dedicated access methods; however, there is no strong decision on whether to use one or the other. Finally, Section 3.4.3 shows that there are no existing tools for simulating DVB-RCS2 links that allow significant changes to the algorithms used in the channel distribution process.

3.5 Router queuing delays and Less-than-Best-Effort (LBE) traffic on satellite links

Recently, there has been a renewed interest in exploring Less-than-Best-Effort (LBE) access in the Internet research community and standards bodies. LBE, also known as the Scavenger class of traffic, came into existence almost a decade ago with work carried out at Internet2 [86]. The Internet Engineering Task Force (IETF) has started focusing on LBE congestion methods [41] to transmit background data. Various congestion control mechanisms have been identified as good candidates to support LBE traffic, such as the delay-based TCP Vegas, or NF-TCP [87]; a more complete survey can be found in [41]. Due to its recent standardization at the IETF, LEDBAT [46], however, seems to be the most promising LBE mechanism.

3.5.1 Using extra bandwidth for traffic with LEDBAT on satellite links

A recent paper [42] proposes the use of LBE access to provide free Internet access. The idea is to leverage the unused capacity to carry signaling or non-commercial traffic with an LBE protocol. We aim to explore the performance of the Low Extra Delay Background Transport (LEDBAT) [46] protocol over large bandwidth delay product (LBDP) networks. We specifically verify the use of LEDBAT to transmit LBE traffic over congested satellite networks, and identify the performance implications of LEDBAT traffic sharing the network with other widely used congestion-controlled transport protocols. Indeed, the authors of [87] have shown that LEDBAT is unfair to TCP 20 when the BDP is large (e.g., an RTT of 100 ms and a capacity of 600 Mbps).

3.5.2 LEDBAT Algorithm

LEDBAT is characterized by the following parameters: target queuing delay (τ), impact of the delay variation (γ = 1/τ), minimum One-Way Delay (D_min) and current One-Way Delay (D_ack). For each ACK received, the new congestion window (cwnd) value is updated according to:

cwnd = cwnd + γ (τ − (D_ack − D_min)) / cwnd    (3.1)

LEDBAT congestion control is based on queuing delay variations (i.e., the queuing delay is used as a primary congestion notification), estimated by (D_ack − D_min). When the size of the queue is large (τ < (D_ack − D_min)), LEDBAT reduces its congestion window. Therefore, the target queuing delay τ embodies the maximum queuing time that LEDBAT is allowed to introduce. When a sender using the LEDBAT method sends its first packet, if the network is loaded, the minimum queuing delay can be overestimated, causing the maximum value of the One-Way Delay (D_max, estimated as D_max = D_min + τ) to be higher than for other LEDBAT flows that were already transmitting data. This bad estimation of D_min introduces what is called the latecomer's advantage. In [88], the authors illustrate this phenomenon and propose a multiplicative-decrease solution to this problem. However, the RFC ignores this problem, stating that system noise may sufficiently regulate the latecomer's advantage. In [89], the authors identify the negative impact of route changes on the performance of LEDBAT. The paper provides an analysis of the phenomenon without a concrete solution to the problem.
As with the latecomer's advantage, this problem is linked to an overestimation of the minimum queuing delay when the first packet is transmitted. No solution has yet been proposed to overcome this problem.

20 It is worth noting that they use TCP Reno, which is known not to be aggressive enough on LBDP paths.
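The per-ACK update of Equation (3.1) can be sketched as follows; the function name and numeric values are illustrative, and a real LEDBAT implementation also maintains D_min over a measurement window:

```python
TAU = 0.1        # target queuing delay tau, in seconds (illustrative value)
GAMMA = 1 / TAU  # gain gamma = 1/tau, as defined in the text

def on_ack(cwnd, d_ack, d_min):
    """One LEDBAT congestion window update per received ACK (Equation 3.1)."""
    off_target = TAU - (d_ack - d_min)  # positive when the queue is below target
    return max(1.0, cwnd + GAMMA * off_target / cwnd)

# Estimated queuing delay (50 ms) below the target: the window grows.
grown = on_ack(10.0, d_ack=0.25, d_min=0.20)
# Estimated queuing delay (200 ms) above the target: the window shrinks.
shrunk = on_ack(10.0, d_ack=0.40, d_min=0.20)
```

The sketch makes the latecomer's advantage easy to see: a flow that overestimates d_min underestimates the queuing delay (d_ack - d_min) and therefore keeps growing its window while older flows back off.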
There is a clear lack of in-depth investigation of the performance of LEDBAT in the context of long-delay paths. We expect to verify by simulation whether LEDBAT is suitable for supporting LBE traffic in a long-delay path context.
Chapter 4

Realistic cross-layer evaluations in 4G: on the impact of link layer reliability schemes on the performance of TCP

In this chapter, we study the impact of reliability mechanisms introduced at the link layer on the performance of transport protocols in the context of 4G satellite links. The implementation of physical layer schemes is commonly linked to specific hardware, making it ill-suited to modification after the design or deployment of the system. Reliability schemes can be introduced at the link layer in order to recover data that the physical layer may not be able to rebuild. Further investigation is needed of the impact of link layer schemes on the performance of transport protocols under realistic conditions, which can only be achieved by considering real physical layer performance. Specifically, we design a software module that performs a realistic analysis of the network performance. The software module is composed of two main components: the Trace Manager Tool (TMT) [13], which, based on physical layer traces, produces equivalent link layer traces as a function of the chosen link layer reliability mechanism; and the Cross-Layer InFormation Tool (CLIFT) [14], which loads the link layer traces into the NS-2 network simulator. The results presented in this chapter are published in the International Journal of Satellite Communications and Networking [15]. This chapter is organized as follows. We present the validation process and outline the cross-layer simulation setup in Section 4.1. We provide the characteristics of the satellite links depending on the scenario in Section 4.2. We present the results for the Distribution scenario, characterized by a low error rate at the physical layer, in Section 4.3. Section 4.4 presents results for the Interleaved Internet scenario, where we consider a noisy channel. A noisy channel is also considered in Section 4.5, where results for the Non-Interleaved Internet scenario are presented. We propose a discussion in Section 4.6.
4.1 Cross-Layer Information Tool (CLIFT): link layer reliability schemes on physical layer traces and integration in NS-2

The main idea behind CLIFT is to schedule the transmission of datagrams depending on the lower layers' characteristics, by loading physical layer traces and implementing link layer reliability schemes on top of them. Models may exist in different areas; however, they might be neither accurate nor public, which limits their use for research purposes. Measured traces can be an easy way to overcome the channel estimation problems and to increase the reliability of the results obtained. As a result, CLIFT avoids the problem of modeling the behaviour of the physical layer path in terms of signal strength evolution and interference, which can be complex in the context of mobile users and satellite links. We present the main internal components of CLIFT in Section 4.1.1, the physical layer traces in Section 4.1.2, the link layer Trace Manager Tool in Section 4.1.3, the NS-2 module in Section 4.1.4, the definition of a simulation in Section 4.1.5 and the extendability of CLIFT in Section 4.1.6.

4.1.1 CLIFT main internal components

Before diving into the software details, we first present in this section the overall structure of CLIFT and the links between its internal components. CLIFT requires inputs for each channel (cf. Figure 4.1):

The physical layer trace: traces indicating the number of useful PLDUs that are successfully received and the corresponding error probabilities. More details can be found in Section 4.1.2.
The parameters file: a file containing the different parameters of the link that are used in various components of CLIFT. More details can be found in Section 4.1.5.

CLIFT is based on three main components (cf. Figure 4.1):

The link layer component: for each link of the network, CLIFT loads a given physical trace and a parameters file (containing link layer parameters, such as the reliability scheme used or the size of the link layer data units).
We explain in Section 4.1.3 how reliability schemes at this layer can be taken into account through the Trace Manager Tool (TMT);
- The NS-2 block component: we developed a queuing module in NS-2 that loads these link layer traces to schedule the transmission of transport layer segments. The NS-2 module implementation is detailed in Section 4.1.4;
- The metric evaluation block component: provides the resulting measures (e.g., transport layer throughput, link layer throughput efficiency, delay, etc.).

Figure 4.1: Structure of the software

Therefore, the main achievement of CLIFT is that real measured channel evolutions can be considered, whereas modeling such channels might lead to approximations and errors.

4.1.2 Physical layer trace format

In this section, we focus on the physical layer trace format: any trace compliant with this format can be loaded into CLIFT. CLIFT accepts, as an input, several physical trace formats: measured traces (such as those provided in CRAWDAD) or traces generated by a physical layer emulator [24] or a simulator [25]. Each Physical Layer Data Unit (PLDU) sent at the physical-layer level is characterised by a transmission date and a decoding time. The decoding time is composed of the different delays introduced by the reliability schemes at the physical layer (interleaving and recovery delay). We denote:

t_i as the transmission date of LLDU i;
d_i as the decoding time of LLDU i.

At t = RTT/2 + t_i + d_i, the physical layer delivers LLDU i to the link layer, if there is no supplementary delay (congestion, queuing, etc.). We also consider that the LLDU is erased when d_i = 0. To ease the understanding of the link between the transmission date and the decoding time, Figure 4.2 illustrates how they are affected by interleaving at the physical layer. The transmission date is linked to the bandwidth and the length of the code at the physical layer. The decoding time is linked to the duration of the interleaving, the channel state and the transmission time.

Figure 4.2: Physical layer traces: transmission and decoding times

4.1.3 Trace Manager Tool (TMT) and link layer reliability schemes

In this section, we present the Trace Manager Tool (TMT), which takes as input the physical layer traces detailed in Section 4.1.2 and produces link layer traces with the same format, where the decoding times are adapted depending on the reliability schemes introduced at the link layer.

Presentation of TMT

We present the Trace Manager Tool, which implements standard link layer reliability schemes such as ARQ, FEC and HARQ.
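To make the trace format of Section 4.1.2 concrete, the following sketch (hypothetical Python, not part of CLIFT) assumes the trace is held in memory as (t_i, d_i) pairs, computes the delivery time RTT/2 + t_i + d_i of each LLDU, and marks erased LLDUs (d_i = 0):

```python
# Sketch (assumed in-memory representation, not the CLIFT implementation):
# each trace entry is (t_i, d_i), with t_i the transmission date and d_i the
# decoding time of LLDU i; d_i == 0 marks an erased LLDU, as defined above.

def delivery_times(trace, rtt):
    """Return the delivery time RTT/2 + t_i + d_i of each LLDU,
    or None for erased LLDUs."""
    out = []
    for t_i, d_i in trace:
        if d_i == 0:
            out.append(None)  # erased at the physical layer
        else:
            out.append(rtt / 2 + t_i + d_i)
    return out
```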
The input data of TMT consists of a list of parameters (characteristics of the reliability scheme) and the physical layer trace considered. We propose two ways to use the physical layer input, depending on the origin of the erasures:

- direct use of the physical layer trace: the physical traces are measured and erasure events occur at the link layer according to real channel evolutions and to physical layer error codes compliant with 4G requirements;
- indirect use of the physical layer trace: erasures are introduced on an error-free input trace following a Gilbert-Elliott model, as explained below.

TMT computes the equivalent output link layer trace according to the input trace and the chosen reliability scheme. We only keep useful data Link-Layer Data Units (LLDUs) (and not repair LLDUs or retransmissions) and adapt their decoding delay according to the chosen reliability scheme. Then, we compute the throughput efficiency (i.e., goodput) and the recovery delay. The principle is as follows: the decoding time of an erased LLDU is linked to the reliability scheme involved, so as to estimate the time when the recovered LLDU must be sent. The supplementary time introduced by the link layer reliability scheme, denoted d_i, is the time needed to obtain (t_R) and decode (d_R) the LLDU R that enables the recovery of LLDU i: d_i = t_R + d_R − t_i. If an erasure code is introduced (FEC or HARQ), LLDU R may enable the recovery of several lost LLDUs. We illustrate this trace management in Figure 4.3. At the receiver side, physical layer data units are delivered to the link layer at RTT/2 + t_i + d_i. In the following section, we derive a model of the bursty erasure link layer, and the resulting throughput efficiency and packet recovery delay. We use this model to validate the results derived from the TMT module. It is worth noting that TMT implements reliability schemes considering that data is transmitted on all the physical layer data units of the physical layer trace.
The traffic that is later scheduled on the link layer trace generated by TMT may not use all the available LLDUs. The metric of throughput efficiency is therefore different from the satellite link utilisation: in this section, we measure and model the throughput efficiency at the link layer level, whereas the satellite link utilisation is measured at the application layer level, considering the reliability schemes and headers introduced at the different layers (i.e., physical layer coding ratio, transport layer headers, etc.).
Figure 4.3: An overview of TMT

In Section 4.1.4, we further explain how the transmission of datagrams is scheduled according to these link layer traces.

Validation of TMT

We cross-validate the implementation of TMT by comparing (1) the throughput efficiency and recovery delay measured on the output trace generated by TMT and (2) the theoretical throughput efficiency and recovery delay over a given link layer bursty erasure channel, presented below.

Channel model

This section explains how we use the concept of a bursty bit error channel (physical layer) model to derive a bursty erasure Link Layer Data Unit (LLDU) model. We base our analysis on the algorithms presented in [90] to model link layer reliability schemes, with a slight adaptation. In [90], the authors propose two methods to express the error
probabilities of an error correcting code over a bursty channel. In particular, they provide a complete expression and computation method for the bit error probability (i.e., at the physical layer). In our context, we need to adapt these results by considering erasures at the link layer.

A Gilbert-Elliott channel is commonly used to represent a bursty error channel at the physical layer. The good state (resp. bad state) presents an error probability P_G (resp. P_B) and a state-change probability 1−α (resp. 1−β), as illustrated in Figure 4.4. In the good state (resp. bad state), errors occur with low (resp. high) probability, which illustrates the bursty aspect of the channel.

Figure 4.4: 2-state Markov chain

We also use this model, with the corresponding erasure probabilities, to simulate bursty erasures at the link layer, as illustrated in Figure 4.5. Note that we do not represent lost datagrams, as the recovery capacity of the network layer is linked to the reliability scheme introduced. In the context of satellite transmissions, this model is of interest since long bursts of erasures might occur. The erasure probability distribution during a transmission over a channel with memory can be analysed through a Gilbert-Elliott model. As this Gilbert-Elliott model applies to every LLDU, all the different erasure combinations over a number of LLDUs can be considered through a mathematical induction. We present the iterative method used in the following analysis to determine P(m, n), the probability of having m erasures over n LLDUs. We denote by P_G(m, n) (resp. P_B(m, n)) the probability of having m erasures over n LLDUs and being in the good state (resp. bad state) when the n-th LLDU is received. In order to compute P(m, n), we derive a double mathematical induction over m and n, considering first the current state of the chain, and then the current erasure probability.
Equations (4.1) and (4.2), proposed in [90], detail how we determine P(m, n); our Matlab implementation has been cross-validated with the results presented in [90].
Figure 4.5: Bursty errors and bursty erasure models

P(m, n) = P_G(m, n) + P_B(m, n)    (4.1)

P_G(m, n) = P_G(m, n−1)·α·(1−P_G) + P_B(m, n−1)·(1−β)·(1−P_G) + P_G(m−1, n−1)·α·P_G + P_B(m−1, n−1)·(1−β)·P_G    (4.2)

Queuing delays and processing times are considered in standard link layer models. As our analytical model is designed for satellite links (with round trip times greater than 400 ms), the impact of these additional delays can be neglected in comparison to the round trip time delay.
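As an illustration of this double induction, the following sketch (hypothetical Python; the thesis implementation is in Matlab) iterates the recursion of Eq. (4.2) to obtain P(m, n). Starting from the stationary state distribution is an assumption here, as are all names:

```python
# Sketch (not the authors' code): P(m, n), the probability of m erasures
# over n LLDUs on a Gilbert-Elliott channel. alpha/beta are the stay
# probabilities of the good/bad state; pg/pb the per-state erasure
# probabilities (P_G and P_B in the text).

def erasure_distribution(n, alpha, beta, pg, pb):
    # pG[m], pB[m]: probability of m erasures so far, ending in the good
    # (resp. bad) state. Start from the stationary state distribution.
    stat_good = (1 - beta) / ((1 - alpha) + (1 - beta))
    pG = [0.0] * (n + 1)
    pB = [0.0] * (n + 1)
    pG[0] = stat_good * (1 - pg)          # first LLDU received, good state
    pG[1] = stat_good * pg                # first LLDU erased, good state
    pB[0] = (1 - stat_good) * (1 - pb)
    pB[1] = (1 - stat_good) * pb
    for _ in range(n - 1):                # remaining n-1 LLDUs
        nG = [0.0] * (n + 1)
        nB = [0.0] * (n + 1)
        for m in range(n + 1):
            # Transitions into each state, LLDU received (no new erasure):
            nG[m] = (pG[m] * alpha + pB[m] * (1 - beta)) * (1 - pg)
            nB[m] = (pG[m] * (1 - alpha) + pB[m] * beta) * (1 - pb)
            if m > 0:                     # LLDU erased: one more erasure
                nG[m] += (pG[m - 1] * alpha + pB[m - 1] * (1 - beta)) * pg
                nB[m] += (pG[m - 1] * (1 - alpha) + pB[m - 1] * beta) * pb
        pG, pB = nG, nB
    return [g + b for g, b in zip(pG, pB)]  # Eq. (4.1): P = P_G + P_B
```

Probability mass is conserved at each step, which gives a quick sanity check (the distribution sums to 1).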
Throughput efficiency and recovery delay

We define the throughput efficiency as the ratio between the number of received LLDUs and the number of transmitted LLDUs, and the recovery delay as the time needed to recover an erased LLDU. We evaluate the theoretical expressions for these two metrics in order to validate the development of TMT.

FEC: Forward Error Correction

In the FEC scheme, the sender sends a combination of data and repair LLDUs. Let N_D (resp. N_R) be the number of data (resp. repair) LLDUs and N = N_D + N_R. The process to recover data LLDUs is successful if at least N_D LLDUs are received; otherwise (if the number of erasures is strictly greater than N_R), no correction is possible. The FEC scheme does not enable the retransmission of LLDUs.

First, we define the throughput efficiency as the ratio of the number of received LLDUs to the total number of LLDUs sent:

η_FEC = (Σ_{i=1}^{N_D} P_R(i)·i) / (N_D + N_R)    (4.3)

where P_R(i) represents the probability that i LLDUs are received. Over a bursty erasure channel, this is computed following the previously explained mathematical induction, considering various cases depending on the number of erasures and the correction capacity of the code. By implementing P_R(i) in Matlab, we have checked that Σ_{i=0}^{N_D} P_R(i) = 1.

Second, if an LLDU is erased, the additional delay corresponds to the time needed to receive the whole datagram (data and repair LLDUs), needed by the FEC scheme to evaluate whether this datagram can be recovered. This recovery delay, d, is related to the position of the LLDUs in the datagram. On average, we can consider the erasure to be located in the middle of the datagram. With this hypothesis, the recovery delay d for datagrams at the receiver can be calculated as:

d = RTT/2 + p·(1 − Σ_{i=N_R}^{N−1} P(i, N−1))·(N/2)·T_P    (4.4)

where T_P is the time needed to receive an LLDU and p the global erasure probability.
If the LLDU is lost, we add the time (second part of (4.4)) that corresponds to the time needed to receive the N LLDUs. On average, we consider the erasure to be located in the
middle of the IP packet: this explains why, if an erasure event occurs, we add N/2·T_P. This time is added only if the LLDU can be recovered; otherwise, the packet is discarded and its recovery delay is not considered, as there are no retransmissions with the FEC mechanism.

Interleaved FEC

Interleaving is an efficient and commonly used technique to improve data transmission over a bursty channel, as erasure bursts can be spread over a number of different codewords. It is possible to change the characteristics of the channel with (4.5) in order to take the interleaving into account. Let p and ρ be the local erasure probability (the probability for a single LLDU to be erased without considering the previous state of the channel) and the correlation between the states (considering a simplified channel with P_G = 0 and P_B = 1). Equations (4.5) and (4.6) have been proposed in [90] and we use them to model the erasures at the link layer level. We have:

p = (1 − α) / (2 − α − β) and ρ = α + β − 1    (4.5)

If interleaving with a depth I is used on this bursty channel, the authors of [90] obtain a new bursty channel with the following state-change probabilities α_I and β_I:

α_I = p + ρ^I·(1 − p) and β_I = (1 − p) + ρ^I·p    (4.6)

The performance of interleaved FEC can then be obtained by applying the parameters α_I and β_I in equations (4.3) and (4.4).

ARQ: Automatic Repeat-reQuest

The Automatic Repeat-reQuest mechanism at the link layer consists in the retransmission of the LLDUs that have been lost during the transmission. The throughput efficiency (also called goodput, which is, by definition, the application layer throughput) corresponds to the probability that an LLDU is received. In the context of high delay links, the channel has probably changed its state before retransmissions are sent. Thus, we do not consider bursts of erasures when using ARQ. Furthermore, we can neglect this notion as this scheme does not introduce correlation between different LLDUs of the same datagram.
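The channel transform of Eqs. (4.5) and (4.6) can be sketched as follows (hypothetical Python; the formulas are transcribed as given in the text, after [90], and the function name is an assumption):

```python
def interleaved_channel(alpha, beta, depth):
    # Sketch of Eqs. (4.5)-(4.6): interleaving with depth I turns a bursty
    # channel (alpha, beta) into an equivalent bursty channel (alpha_I,
    # beta_I), where p is the local erasure probability and rho the
    # correlation between states.
    p = (1 - alpha) / (2 - alpha - beta)   # Eq. (4.5)
    rho = alpha + beta - 1                 # Eq. (4.5)
    alpha_i = p + rho**depth * (1 - p)     # Eq. (4.6)
    beta_i = (1 - p) + rho**depth * p      # Eq. (4.6)
    return alpha_i, beta_i
```

Note that as the depth grows, ρ^I vanishes (|ρ| < 1) and the transformed probabilities depend only on p.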
Then, the recovery delay can be expressed as follows:
d_ARQ = RTT/2 + Σ_{i=1}^{∞} p^{i−1}·(1−p)·i·RTT

where p is the global erasure probability.

HARQ-II: Hybrid ARQ of type II

This mechanism is a combination of the FEC and ARQ mechanisms: after the first transmission of a FEC block, including data and repair LLDUs, HARQ-II allows the sender to transmit additional repair LLDUs when recovery is not possible at the receiver side. In other words, if no correction is possible at the receiver, the transmission of additional repair LLDUs is requested by the receiver. At each new transmission, the sender transmits more LLDUs than requested by the receiver: if the receiver requires n LLDUs to recover the data, the transmitter sends (n + N_S) LLDUs, where N_S is the number of supplementary repair LLDUs sent. Let R_r be the probability that the data can be decoded after r retransmissions, T_R(r) the time needed to receive the LLDUs of the r-th retransmission, N_D the number of data source LLDUs, N_R the number of repair LLDUs, and N = N_D + N_R. For applications with time constraints, a limited number of authorized retransmissions, denoted by R, is considered.

The throughput efficiency for HARQ-II is expressed as the ratio of the number of received LLDUs to the total number of LLDUs sent:

η_HARQ = (Σ_{i=1}^{N_D} P_R(i)·i) / (Σ_j P_S(j)·j)

where P_R(i) is the probability that i LLDUs are received and P_S(j) the probability that j LLDUs are sent.

Σ_{i=1}^{N_D} P_R(i)·i = (Σ_{z=0}^{R−1} R_z)·N_D + Σ_{i=1}^{N_D−1} P_R(i)·i    (4.7)

Equation (4.7) represents the number of LLDUs received, which is N_D if the R retransmissions enabled the recovery of the useful LLDUs (i.e., with probability Σ_{z=0}^{R−1} R_z, where R_z is the probability that the z-th retransmission enabled the reception of at least N_D LLDUs). R_z is derived from expressions close to (4.3). If the R retransmissions did not enable the recovery, the second part of the equation counts the LLDUs that were successfully received during the first transmission.
With R = 2 (two complementary transmissions are authorized), (4.7) can be calculated according to the following expression:

P_R(i, i < N_D) = P(N_D − i, N_D) · Σ_{l1=δ_{i,N_D,N_R}}^{N_R} Σ_{l2=N_S+1}^{(N_D−i)+N_S−N_R+l1} Σ_{l3=N_S+1}^{l2+N_S} Π_{i,N_D,N_R,N_S,l1,l2,l3}

with:

Π_{i,N_D,N_R,N_S,l1,l2,l3} = P(l1, N_R) · P(l2, (N_D−i)+N_S−N_R+l1) · P(l3, l2+N_S)

and

δ_{i,N_D,N_R} = 0 if (N_D−i) > N_R; (N_D−i)−N_R if (N_D−i) < N_R; 1 if (N_D−i) = N_R

We consider every combination of erasure positions to determine P_S(j). For each complementary transmission, the number of repair LLDUs sent is linked to the current number of erasures. If there are n erasures in the first datagram (data and repair LLDUs) sent, and if the correction capacity of the code is N_R, there are two possibilities: if n ≤ N_R, no transmission of repair LLDUs is needed; if n > N_R, the receiver requests n − N_R + N_S repair LLDUs. The expressions in (4.8) are given with R = 2 and with P(m, n):

P_S(j) = Σ_{l0=0}^{N_R} δ_N·P(l0, N)
 + Σ_{l0=N_R+1}^{N} Σ_{l1=0}^{N_S} δ_{l0,N}·P(l0, N)·P(l1, l0−N_R+N_S)
 + Σ_{l0=N_R+1}^{N} Σ_{l1=N_S+1}^{l0+N_S} δ_{l0,l1,N}·P(l0, N)·P(l1, l0−N_R+N_S)    (4.8)
with:

δ_N = 1 if j = N, 0 otherwise
δ_{l0,N} = 1 if j = N + (l0 − N_R + N_S), 0 otherwise
δ_{l0,l1,N} = 1 if j = N + (l0 − N_R + N_S) + l1, 0 otherwise

In Equation (4.8), P_S(j) represents the distribution of the number of LLDUs transmitted. Each part of this equation corresponds to one number of retransmissions needed, depending on the erasure events which govern the retransmissions. If no retransmission is needed (first part), the number of LLDUs transmitted is N; this event is characterized by the probability P_S(N). For the retransmissions, the number of LLDUs transmitted is related to the number of erasures that occur and to the parameters of HARQ. As an example, P_S(N+2) is the probability of transmitting N+2 LLDUs: N during the first transmission, and then either 2 LLDUs transmitted in the first retransmission, or 1 LLDU transmitted in the first retransmission and 1 LLDU in the second retransmission. This is an example of the different cases that Equation (4.8) considers.

In order to estimate the recovery delay, we have to consider both the time needed to receive the first FEC block (data and repair LLDUs), T_R(0), and the additional repair LLDUs. This recovery delay, denoted d_HARQ, can be expressed as follows:

d_HARQ = T_R(0) + RTT/2 + Σ_{i=1}^{R} R_i·i·(RTT + T_R(i))

with RTT ≫ T_R(i).

Cross-validation and illustration

In this section, we measure the resulting throughput efficiency and recovery delay over the link layer output. We then cross-validate the TMT results against the theoretical metrics presented above.
Cross-validation

For each state, we compute the theoretical metrics through the equations detailed above and the resulting metrics obtained with TMT. In the use case presented, the physical trace corresponds to a satellite data transmission with a duration of 500 seconds and has been provided by courtesy of CNES. As the physical trace provided is error-free, we introduce bursty erasures over this physical layer trace following the Gilbert-Elliott model. We present in Figures 4.6 and 4.7 the results obtained for a given set of parameters. The chosen parameters are: RTT = 500 ms, N_D FEC = 10, N_R FEC = 12, N_D HARQ = 5, N_R HARQ = 7, α = 0.99, β ∈ [0.1; 0.98], which induces a global erasure probability p ∈ [0.01; 0.3] and a length of erasure bursts t_b ∈ [1; 50].

Figure 4.6: Validation of throughput efficiency (theory vs. TMT for FEC(10/12), ARQ and HARQ(5/7), as a function of the probability to stay in the bad state)

Both figures confirm that the theoretical expressions developed fit the TMT results. Note that we only present a subset of our experiments; several other sets of parameters have been tested with success.

CNES is the government agency responsible for shaping and implementing France's space policy in Europe.
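The indirect use of an error-free trace can be sketched as follows (hypothetical Python, not the TMT implementation): bursty erasures are drawn from the two-state Gilbert-Elliott chain. Parameter names and the choice of starting in the good state are assumptions:

```python
import random

def gilbert_elliott_erasures(n, alpha, beta, pg, pb, seed=0):
    # Sketch: mark each of n LLDUs as erased (True) or received (False).
    # alpha/beta are the probabilities to stay in the good/bad state; pg/pb
    # the per-state erasure probabilities.
    rng = random.Random(seed)
    good = True                      # assumed initial state
    out = []
    for _ in range(n):
        p = pg if good else pb
        out.append(rng.random() < p)
        stay = alpha if good else beta
        if rng.random() >= stay:     # state change with probability 1-stay
            good = not good
    return out
```

The resulting boolean mask can then be applied to an error-free physical layer trace (setting d_i = 0 for erased LLDUs).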
Figure 4.7: Validation of recovery delay (theory vs. TMT for FEC(10/12), ARQ and HARQ(5/7), as a function of the probability to stay in the bad state)

Illustration

We exploit the theoretical expressions given above to compare the three recovery mechanisms in terms of recovery delay and throughput efficiency over a bursty channel. For the simulation, we use the following parameters: RTT = 500 ms, N_D = 38, N_R = 13, R = 2, α = 0.98, β ∈ [0.1; 0.93], which induces a global erasure probability p ∈ [0.01; 0.3] and a length of erasure bursts t_b ∈ [1; 14]. Please note that the interpretation of the following results is limited to the given parameters. The results presented in Figure 4.8 have been obtained with MATLAB.

We note that ARQ and HARQ can transmit supplementary LLDUs if the datagram cannot be rebuilt. In the context of satellite links, the delay resulting from the retransmissions impacts the data delivery: although these retransmissions enable the recovery of lost LLDUs at a later time, they may not be gainfully utilized by time-constrained applications (VoIP, streaming, etc.). When the erasure occurrence is low, ARQ demonstrates better performance than HARQ, as the transmitter does not send useless repair LLDUs. Conversely, when the erasure occurrence is higher, HARQ introduces less delay thanks to the initial repair LLDUs. Although the transmission can be made reliable with both ARQ and HARQ schemes,
the introduced delay needs to be considered in the design of networks with time constraints. The theoretical models presented in this section allow a fast analysis of the performance of reliability schemes over various channels and can consequently assist the network designer in the choice of the most appropriate scheme.

Figure 4.8: Illustration of TMT: (a) throughput efficiency and (b) recovery delay as a function of the erasure probability, for FEC, ARQ and HARQ

In this section, we presented how TMT implements the reliability schemes to produce the equivalent output of the link layer. We expressed the throughput efficiency and recovery delay through an analytical tool, with expressions linked to the bursty erasures that occur in our context. Finally, we cross-validated TMT against this analytical tool. The next section presents how we schedule the transmission of datagrams depending on this output of TMT.

4.1.4 NS-2 module inside CLIFT

We schedule the transmission of the datagrams depending on the link layer traces. We introduce a new queuing module in NS-2 that loads these traces and determines when a datagram can be recovered by an upper layer (depending on the reliability schemes introduced) and sent.

Adding a datagram to the queue: the enqueue() function

A datagram is divided into m LLDUs (LLDU_n, ..., LLDU_{n+m}). We denote E_i as the enqueuing time of datagram i, T_i as its transmission time and D_i as its transmission date. We look in the link layer trace for the LLDU that matches t_n ≤ E_i < t_{n+1}. Over the m LLDUs, we compute D_i = max_{k∈[n,n+m]}(t_k + d_k) − T_i. Indeed, max_{k∈[n,n+m]}(t_k + d_k) + RTT/2
represents the date when datagram i is delivered to the receiver. We handle the case D_i < E_i, since NS-2 is an event-driven simulator: for example, this event might occur when erasure codes are used and bursts of LLDUs are forwarded to the upper layer. With a FEC code, if LLDUs are lost, they are all rebuilt at the same time, upon reception of the N_R-th LLDU.

Removing a datagram from the queue: the dequeue() function

As soon as a datagram enters the queue, we introduce a timer whose value is set depending on the transmission dates of the LLDUs the datagram is broken down into. The timer is defined to expire when there is a datagram to transmit. Therefore, at each expiration of the timer, the method dequeue() is called and the corresponding datagram is transmitted. We then seek the next datagram to be transmitted and set the timer to expire on the transmission date of this next datagram. Moreover, we re-initialize the timer value if: (1) a datagram is enqueued and there is no other datagram in the queue; (2) a datagram is enqueued and its transmission date is earlier than those of the datagrams in the queue; (3) a datagram has to be removed from the queue (timer expiration) and there are datagrams left in the queue.

Packet sending and scheduling principle

Figure 4.9 illustrates the problem occurring when LLDU reliability schemes make datagrams overlap in terms of channel occupancy. In the example of Figure 4.9, each datagram (denoted P1 and P2) is broken into 4 LLDUs (denoted P1,1, P1,2, P1,3, P1,4 and P2,1, P2,2, P2,3, P2,4). The algorithms introduced at the physical and link layers imply that P2,1 should be transmitted before P1,4. However, from the network layer point of view of NS-2, P2,1 cannot be transmitted before P1,4. As a result, in this example, CLIFT adapts the transmission date of P2 so that its transmission does not overlap with the transmission of P1.
In Figure 4.10, we detail the different cases we had to consider, since NS-2 prevents one node from sending two datagrams at the same time. If one of the LLDUs is erased, the whole datagram is dropped. Moreover, the date of this event is linked to the reliability scheme introduced at the link layer: the computed transmission date becomes the drop date. We also consider that a dropped datagram still uses the channel for its transmission and has to be taken into account in the scheduling.
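The enqueue() computation described above can be sketched as follows (hypothetical Python with an assumed (t_k, d_k) list for the link layer trace; the actual module is C++ inside NS-2):

```python
def schedule_datagram(lldu_trace, enqueue_time, n_lldus, rtt):
    # Sketch of the enqueue() computation: find the LLDU slot containing
    # the enqueuing time E_i (largest t_k <= E_i), then take the latest
    # (t_k + d_k) over the LLDUs the datagram maps to. lldu_trace is a
    # list of (t_k, d_k) pairs; d_k == 0 marks an erased LLDU.
    first = max(k for k, (t, _) in enumerate(lldu_trace) if t <= enqueue_time)
    chunk = lldu_trace[first:first + n_lldus]
    if any(d == 0 for _, d in chunk):
        return None  # an LLDU is erased: the whole datagram is dropped
    # The datagram is delivered to the receiver at max(t_k + d_k) + RTT/2.
    return max(t + d for t, d in chunk) + rtt / 2
```

For simplicity this sketch drops the datagram on any erased LLDU; in CLIFT the drop date is itself linked to the link layer reliability scheme, and a dropped datagram still occupies the channel.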
Figure 4.9: Two datagrams sharing the channel in NS-2

Figure 4.10: Adaptation of the transmission date of the datagram

4.1.5 Tcl scripts for CLIFT

In this section, we present the different parameters that are introduced for each link. As detailed in Figure 4.1, this parameter file is used by the mechanisms introduced at the different layers.
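As an illustration, a link parameters file might look as follows. This is a purely hypothetical sketch: the actual key names and file syntax used by CLIFT are not specified here, only the categories of parameters described in this section:

```
# Hypothetical CLIFT link parameters file (illustrative only)
trace_file      physical_trace_down.txt   # global: physical layer trace to load
output_prefix   link1_                    # global: output file names
bandwidth       2250000                   # physical layer information (bps)
rtt             0.5                       # physical layer information (seconds)
ll_scheme       HARQ                      # link layer: FEC, ARQ or HARQ
ll_nd           10                        # link layer: data LLDUs per block
ll_nr           2                         # link layer: repair LLDUs per block
```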
The DropTail/CLIFT queuing policy is implemented in two files (clift.cc and clift.h) located in the queue/ sub-directory of the NS-2 sources. The parameters considered in the simulation scripts are divided into global information (trace name, output file names), physical layer information (bandwidth, RTT, etc.), link layer information (reliability scheme introduced and its characteristics) and application layer information (for simulations with video). In order to introduce CLIFT, the link between two nodes N1 and N2 can then be defined as:

$ns simplex-link $N1 $N2 $bandwidth [expr $rtt_ / 2] \
    DropTail/CLIFT $link_parameters_file

where $link_parameters_file is the name of the file containing the parameters of the link.

4.1.6 Limits and extendability of CLIFT

The main limit of CLIFT is that the physical layer traces are read beforehand, which makes the estimation of the performance of adaptive link layer schemes impossible. This problem could be overcome by introducing the trace management algorithms of TMT inside the NS-2 module. Section 4.1.2 highlights that physical layer traces, measured in any context but compliant with a specific format, can be introduced in CLIFT.

4.2 Physical layer for 4G satellite links

We consider the transmission of data between a satellite and a mobile user. The mobile user moves inside a suburban area at a speed of 60 km/h over 10 km. The satellite carries an LTE waveform, transmitted in S-band with OFDM techniques. The bandwidth is 5 MHz with 300 available frequencies, and the length of the FFT is 512. The satellite is in GEO orbit and has an elevation angle of 40°. In the rest of this section, we denote by up-link the link on which data are transmitted from the mobile user to the satellite, and by down-link the link on which data are transmitted from the satellite to the mobile user. In Figure 4.11, we present the performance of the physical layer codes introduced for each scenario.
We can assess the benefits provided by interleaving at the physical layer.
We use physical layer traces produced by the CNES OFDM/TDM simulation software [26], which includes realistic satellite link characteristics, such as the type of satellite orbit, the error correcting codes, etc. We consider three main scenarios:

- satellite distribution (referred to as the Distribution scenario in the rest of this section, e.g. in Figure 4.11): (1) on the down link, we introduce a Turbo Code 3GPP2 with a code word (before coding) of 1523 bytes; (2) on the up link, we introduce a Turbo Code 3GPP with a code word (before coding) of 33 bytes. The interleaving depth at the physical layer is 36 ms. We present the results of this scenario with Es/N0 = 14 dB, i.e. PER < 10^-2;
- bi-directional Internet traffic with interleaving (referred to as the Interleaved Internet scenario in the rest of this section, e.g. in Figure 4.11): we introduce a Turbo Code 3GPP with a code word (before coding) of 33 bytes on both up and down links. The interleaving depth at the physical layer is 36 ms. We present the results of this scenario with Es/N0 ∈ [5; 8] dB, i.e. PER ∈ [10^-2; 10^-1];
- bi-directional Internet traffic without interleaving (referred to as the Non Interleaved Internet scenario in the rest of this section, e.g. in Figure 4.11): we introduce a Turbo Code 3GPP with a code word (before coding) of 33 bytes on both up and down links. The interleaving depth at the physical layer is 0 ms. We present the results of this scenario with Es/N0 ∈ [5; 12] dB, i.e. PER ∈ [10^-2; 10^-1].

As detailed in Section 4.1.2, each line of the physical layer traces corresponds to one LLDU and is defined by a transmission date and a decoding time (the time needed for the packet to be decoded at both the physical and link layers). As the size of the physical layer unit is known, we measure that, for the Distribution scenario, the available capacity is 2.34 Mbps (up) or 2.25 Mbps (down), and for the Internet scenario, the available bandwidth is Mbps.
The capacity is lower in the Internet scenario: we consider that the bandwidth is fairly shared between 10 users, so only one line out of the ten available is used. The differences between the scenarios are the following:

- the Distribution scenario considers low bit-error rates on the physical link;
- the Interleaved Internet scenario considers high bit-error rates on the physical link and includes the benefits of interleaving on this link;
- the Non Interleaved Internet scenario has the same bit-error rates as the Interleaved Internet scenario, but no interleaving.

Figure 4.11: Performance of physical layer codes in the LTE context (PER vs. Es/N0 for the Distribution scenario, up and down, and the Internet scenario with 36 ms interleaving and without interleaving)

Figure 4.11 presents the Packet-Error-Ratio (PER) for different signal-to-noise ratios. The PER is the probability that the physical layer reliability scheme could not recover the encoded packet. We consider that the size of the LLDU is the same as the useful data that one physical layer data unit can contain: if a physical layer data unit is lost, the corresponding LLDU is lost. The link layer reliability schemes will try to recover this lost LLDU. In the following sections, we propose to measure the impact of various link layer retransmission schemes on the performance of different transport layer protocols for each of these scenarios, i.e., (1) when there are few bit errors, (2) when the bit error ratio increases, and (3) when there is physical layer interleaving.

4.3 Distribution scenario

In this section, we exploit the physical layer trace of the Distribution scenario to assess the impact of ARQ and HARQ-II on the performance of various transport layer
protocols. We consider one FTP transmission between the mobile receiver and the satellite gateway. When the direction of the transmission is up, the mobile transmits data; when the direction is down, the mobile receives data. The size of the IP packets is 1500 bytes. In this scenario, we have Es/N0 = 14 dB, which corresponds to a physical layer unit error ratio slightly lower than 10^-2. In Figure 4.12, we present the average throughput achieved by the different transport protocols using different reliability schemes, when the mobile unit transmits data to a server.

Figure 4.12: Throughput of different versions of TCP (TCP NewReno, CUBIC, TCP Compound, TCP Hybla, TCP Westwood) depending on the link layer retransmission scheme (ARQ, HARQ 10/12, HARQ 10/15, HARQ 50/52), up direction

Firstly, when TCP NewReno or TCP Compound is enabled at the transport layer, the throughput is lower than with the other protocols. We also measure that, for both TCP NewReno and TCP Compound, ARQ at the link layer outperforms HARQ-II in terms of achievable goodput. Secondly, we observe that TCP Hybla, CUBIC and TCP Westwood reach the capacity of the satellite link with the ARQ scheme at the link layer. As a result, we measure that when HARQ(X/Y) is used at the link layer, the used bandwidth is multiplied by X/Y. Indeed, the congestion control of these protocols overcomes the problems introduced by local
errors at the transport layer, as they have been designed without link layer considerations. TCP Hybla has a high increase rate of the congestion window. CUBIC decreases its congestion window by multiplying it by 0.8. TCP Westwood reduces the congestion window to a value computed from an estimation of the optimal window. To further illustrate the performance, we present in Table 4.1 the characteristics of the retransmissions at the transport layer when the transport layer protocol is CUBIC: we highlight that introducing HARQ-II at the link layer reduces the number of transport layer retransmissions.

Table 4.1: Distribution scenario: transport layer retransmissions with CUBIC

Link layer scheme | Retransmission probability | Maximum number of retransmissions
ARQ               |                            |
HARQ(10/12)       |                            |
HARQ(10/15)       |                            |
HARQ(50/52)       |                            |

When the mobile receives data, the physical code word is much longer. As a result, based on the results presented in Figure 4.12, we can already expect that a large coding ratio for HARQ-II would have a negative impact on the throughput achievable by transport layer protocols. In Figure 4.13, we confirm that the trends observed previously hold in this scenario. As TCP Westwood, CUBIC and TCP Hybla have the best performance in terms of throughput, and as the delays measured for TCP Westwood and CUBIC are the same, we observe the evolution of these delays for CUBIC and TCP Hybla in Table 4.2. We present the minimum measured delay, the delay needed to acknowledge 85% of the IP packets and the delay needed to acknowledge 99% of the IP packets. Contrary to what one might think, we observe that when HARQ-II is introduced, the delay is larger. Indeed, an IP packet (1500 bytes) is distributed among 47 LLDUs (33 bytes) and when ARQ is involved, the minimum time needed to transmit an IP packet is the time needed to transmit 47 LLDUs.
As an example, when HARQ(10/12) is chosen, the minimum time needed to transmit an IP packet is the time needed to transmit 60 LLDUs.
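The arithmetic above can be checked with a short sketch. We assume, to reproduce the 47-LLDU figure, that each 33-byte LLDU carries 32 bytes of payload and 1 byte of header; the header size is our assumption, not stated in the text:

```cpp
#include <cassert>

// Ceiling division helper.
static int ceil_div(int a, int b) { return (a + b - 1) / b; }

// Number of LLDUs needed to carry one IP packet.
int lldus_per_packet(int ip_bytes, int lldu_payload_bytes) {
    return ceil_div(ip_bytes, lldu_payload_bytes);
}

// Minimum number of LLDUs on the air with HARQ(K/N): every group of K
// information LLDUs is encoded into N coded LLDUs.
int harq_lldus(int info_lldus, int k, int n) {
    return ceil_div(info_lldus, k) * n;
}
```

With these assumptions, a 1500-byte packet maps to 47 LLDUs, and HARQ(10/12) inflates the minimum transmission to 5 blocks of 12, i.e. 60 LLDUs, matching the figures above.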
Also, we note a difference between the delays measured when TCP Hybla or CUBIC is introduced: the larger delay introduced by TCP Hybla can be explained by the fact that its congestion window oscillates between 750 and 200 packets, whereas the optimal congestion window (estimated by RTT × bandwidth / pkt_size) is 93 packets.

[Figure 4.13: Throughput of different versions of TCP (TCP New Reno, CUBIC, TCP Compound, TCP Hybla, TCP Westwood) depending on the link layer retransmission scheme (ARQ, HARQ10/12) — DOWN direction]

Table 4.2: Average delay of IP packets (UP)

Transport layer protocol | Link layer reliability scheme | Minimum delay [s] | 85% of the packets | 99% of the packets
TCP Hybla | ARQ         | | |
TCP Hybla | HARQ(10/12) | | |
TCP Hybla | HARQ(50/52) | | |
TCP Hybla | HARQ(10/15) | | |
Cubic     | ARQ         | | |
Cubic     | HARQ(10/12) | | |
Cubic     | HARQ(50/52) | | |
Cubic     | HARQ(10/15) | | |
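The optimal window estimate mentioned above (RTT × bandwidth / pkt_size) can be reproduced with a one-line computation. The RTT and bandwidth values in the test are illustrative choices of ours that yield the 93-packet figure; the scenario's exact values are not restated here:

```cpp
#include <cassert>
#include <cmath>

// Bandwidth-delay product expressed in packets: RTT * bandwidth / pkt_size.
int optimal_window_packets(double rtt_s, double bandwidth_bps, int pkt_bytes) {
    return (int)std::llround(rtt_s * bandwidth_bps / (pkt_bytes * 8.0));
}
```

Any window well above this value, as with TCP Hybla's 200-750 packet oscillation, only inflates queuing delay without adding throughput.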
We have presented, in a specific scenario, the problem of reducing the congestion window more often in case of local errors with ARQ at the link layer, and of reducing the available bandwidth with HARQ-II. Also, when we consider an FTP application and a physical layer unit error ratio lower than 10^-2, TCP New Reno and TCP Compound do not succeed in using the whole available bandwidth. TCP Westwood, CUBIC and TCP Hybla succeed in optimizing the use of the bandwidth, as they were designed without link layer considerations and to overcome local error problems. We illustrate that HARQ-II reduces the useful bandwidth when this mechanism is implemented. Also, the delay introduced by TCP Hybla is larger than the delay introduced by TCP Westwood and CUBIC. Based on these statements, we believe that the best combination we present would be to use CUBIC, TCP Westwood or TCP Hybla with ARQ at the link layer, when the physical layer unit error ratio is under 10^-2.

4.4 Interleaved Internet scenario

In this section, we evaluate the impact of retransmission schemes on the performance of the transport protocols, in the Internet scenario detailed in Section 4.2. The difference with the previous section is that the physical layer errors are more prevalent, and less capacity is available for each transmission. Our aim is to verify that the conclusions drawn in the previous section are still valid when the number of errors increases. We consider one FTP transmission of IP packets of 1500 bytes. The bandwidth is limited to 263 kbps, as we consider that the bandwidth is fairly shared between 10 users. We compare the performance of the different transport protocols with diverse retransmission schemes when there is a physical layer interleaving of 36 ms. The main difference with the results presented in the previous section is that we consider high physical layer unit error rates. In Figure 4.14, we present the average throughput measured at the mobile receiver side.
TCP Hybla shows very good performance, whatever the value of Es/N0. We explain this by the fact that TCP Hybla blasts out packets, overestimates the achievable bandwidth, and maintains a very large congestion window. As a result, this protocol could be a good candidate to transmit data over 4G links. If we consider the performance of the other tested transport protocols, when the physical layer unit error rate is high (higher than 10^-2, Figure 4.11), we note that there are important benefits in terms of bandwidth when HARQ-II is introduced at the link layer. As an example, we focus on the results measured when CUBIC is introduced at the transport layer. At Es/N0 = 5 dB, with ARQ, we measure an achieved throughput of 81 kbps, and with HARQ(10/12), of 140 kbps: introducing HARQ(10/12) at the link layer increases the goodput by 59 kbps. At Es/N0 = 6 dB, with ARQ, we measure an achieved throughput of 153 kbps, and with HARQ(10/12), of 215 kbps: introducing HARQ(10/12) increases the goodput by 62 kbps. When there are fewer physical layer errors, we validate the assumption that when the capacity is fully exploited, transmitting redundancy packets with HARQ-II reduces the goodput, i.e., the bandwidth available for useful data. Indeed, when the transport layer protocol is Cubic, at Es/N0 = 8 dB, with ARQ, we measure an achieved throughput of 258 kbps, and with HARQ(10/12), of 215 kbps: introducing HARQ(10/12) reduces the goodput by 43 kbps. We evaluate the behavior observed in the previous simulations by considering the transmission of 0.1 Mb (median Internet web page size^3,4) with different transport layer protocols, different reliability schemes and different transmission times (to consider different channel states). In Table 4.3, we present the time needed to transmit these data using the different simulation parameters: we ran 200 iterations and present the average value. As pointed out before, the value of Es/N0 severely impacts the transmission delay. If the transport protocol is not TCP Hybla and if the signal-to-noise ratio is low, we measure the benefits in terms of delay provided by the initial transmission of a FEC block with HARQ-II. We also validate the fact that these benefits are limited when Es/N0 increases, and ARQ has better performance.
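The trade-off measured above can be captured by a first-order goodput model. This is a sketch under simplifying assumptions of ours (uniform losses, no overhead beyond the coded bits and the lost packets), not the simulator's model:

```cpp
#include <cassert>

// HARQ(K/N) spends a fraction N/K of the link capacity on coded bits; a
// residual packet error ratio `per` of the remaining traffic is still lost.
double goodput_bps(double capacity_bps, int k, int n, double per) {
    return capacity_bps * (double(k) / n) * (1.0 - per);
}
```

Modeling ARQ as rate-1 coding, the model reproduces the two regimes: near capacity with few residual losses, ARQ's full-rate transmission wins; when ARQ suffers heavy residual losses, HARQ(10/12)'s fixed 10/12 rate penalty is cheaper than the retransmissions it avoids.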
In this section, we conclude that when the number of errors increases at the physical layer, HARQ-II enables a significant improvement of the performance of transport layer protocols: we justify this by measuring the achievable throughput when FTP applications are considered and by measuring the delay needed to transmit a fixed amount of data.

3: According to Google Web Metrics, the median Internet web page size is 180 kb. More than 30% of the web pages weigh less than 100 kb.
4: See for instance
[Figure 4.14: Transport layer performance: impact of HARQ-II and ARQ when Es/N0 decreases — throughput [kbps] versus Es/N0 [dB] for (a) TCP New Reno, (b) TCP Compound, (c) TCP Westwood, (d) TCP Hybla and (e) Cubic, each with ARQ, HARQ10/12 and HARQ50/52]

4.5 Non Interleaved Internet Scenario

In this section, we consider a scenario with no interleaving at the physical layer: the mobile user transmits data (FTP applications), with Cubic or TCP Hybla at the transport layer, with different retransmission schemes at the link layer. In Figure 4.15, we present the achievable throughput. The results are compared to those presented in Figure 4.14, obtained with a physical layer interleaving of 36 ms duration.
Table 4.3: Time needed to transmit 0.1 Mb

Transport layer protocol | Link layer reliability scheme | 5 dB | 6 dB | 7 dB | 8 dB
TCP New Reno | ARQ         | | | |
TCP New Reno | HARQ(10/12) | | | |
TCP New Reno | HARQ(10/15) | | | |
TCP Compound | ARQ         | | | |
TCP Compound | HARQ(10/12) | | | |
TCP Compound | HARQ(10/15) | | | |
TCP Westwood | ARQ         | | | |
TCP Westwood | HARQ(10/12) | | | |
TCP Westwood | HARQ(10/15) | | | |
TCP Hybla    | ARQ         | | | |
TCP Hybla    | HARQ(10/12) | | | |
TCP Hybla    | HARQ(10/15) | | | |
Cubic        | ARQ         | | | |
Cubic        | HARQ(10/12) | | | |
Cubic        | HARQ(10/15) | | | |

We can observe that the interleaving has no effect on TCP Hybla performance. Indeed, this result follows from the property of TCP Hybla that transmits IP packets by setting its congestion window to an estimated value. With Cubic, when ARQ is introduced at the link layer, with interleaving, at Es/N0 = 8 dB the achieved throughput is 258 kbps; without interleaving, at Es/N0 = 8 dB the achieved throughput is 111 kbps, and at Es/N0 = 12 dB the achieved throughput is 258 kbps. In this case, at an equivalent Es/N0 level, interleaving enables a gain of 147 kbps; to achieve the same throughput, interleaving enables a gain of 4 dB in Es/N0. With Cubic, when HARQ(10/12) is introduced at the link layer, with interleaving, at Es/N0 = 6 dB the achieved throughput is 215 kbps; without interleaving, at Es/N0 = 6 dB the achieved throughput is 106 kbps, and at Es/N0 = 8 dB the achieved throughput is 215 kbps. In this case, for an equivalent Es/N0 level, interleaving enables a gain of 109 kbps; to achieve the same throughput, interleaving enables a gain of 2 dB in Es/N0.
[Figure 4.15: Transport layer performance without physical layer interleaving — throughput [kbps] versus Es/N0 [dB] for (a) TCP Hybla and (b) Cubic, each with ARQ, HARQ10/12 and HARQ50/52]

4.6 Discussion

In this chapter, we present a study on the impact of link layer reliability schemes on the performance of transport layer protocols. We present a simulation setup that exploits
realistic physical layer traces and comprises two tools: the Trace Manager Tool (TMT), which computes the output of the link layer by applying reliability schemes to the traces, and the Cross-Layer InFormation Tool (CLIFT), which enables NS-2 to read these traces. Considering physical layer traces enables realistic performance evaluation while avoiding the dimensioning and modeling problems of the physical layer. This software has been developed with a specific use case in mind: the impact of link layer retransmissions on transport layer protocols over 4G satellite links. However, it can also be exploited for different applications, and we published a study on the impact of link layer retransmissions in the aeronautical communications context [91]. When redundancy data is transmitted (with the use of error correction codes), we show that there are fewer congestion window reductions, and an improved use of bandwidth for data transmission on a satellite link. However, if the channel capacity is reached (e.g., for an efficient transport layer protocol operating with a low physical layer bit-error rate), transmitting the redundancy data is counterproductive; on the other hand, if the number of bit errors is high (resulting in erasure events at the link layer), we show that HARQ-II can outperform ARQ at the link layer. In conditions where the physical channel error rate is high, Hybrid-ARQ results in the best performance for all TCP variants considered, with up to 22% improvement compared to other schemes.
Chapter 5

Channel access methods and TCP: on the choice of a channel access method for the home users of the return satellite channel of DVB?

This chapter presents a study on the impact of channel access schemes on the performance of transport protocols in the context of DVB-RCS2. The latest specifications of DVB-RCS2, which are under validation, do not specify whether the satellite gateway should introduce a random or a dedicated channel access method to distribute the capacity among the different home users. There is a need for experiments and interpretations of the differences between those methods and their impact on the end-to-end performance. We develop Physical Channel Access (PCA), a module in NS-2 [50] that models random and dedicated access schemes to evaluate the end-to-end performance of TCP sessions. We highlight an interest for random access methods in [51]. The results presented in this chapter are under review for submission as a journal paper [52]. The rest of this chapter is organized as follows. In Section 5.1, we present a module, developed in NS-2, allowing to model the DVB-RCS2 channel access, including experimental methods. Section 5.2 presents the dimensioning of the network. We assess the global network performance, when multiple TCP sessions are introduced, to compare dedicated and random access methods in Section 5.3. In Section 5.4, we evaluate the time needed to transmit the first packets of a TCP session with different access methods. In Section 5.5, we give an example of how mixing random and dedicated access methods is definitely of interest when the load of the network is low. We conclude with a discussion in Section 5.6.
5.1 Physical Channel Access (PCA): modeling diverse link layer channel access methods in NS-2

The main idea behind PCA is to schedule the transmission of datagrams depending on the events that we model at lower layers. PCA eases the implementation of the lower layers while keeping their behaviour realistic enough. We present the modeling of the channel access in Section 5.1.1, the NS-2 module in Section 5.1.2, then the definition of a simulation and the extendability of PCA.

5.1.1 Model the access

The capacity is dynamically distributed between the different users on the time-frequency frame whose structure is detailed in Figure 3.5. At the access point, the transmission of a frame is scheduled every T_F. We denote by N_S the number of time slots available per frequency. The frequencies on which data is transmitted can be divided depending on the access method: F_R frequencies are dedicated to the random access methods and F_D are reserved for the dedicated access methods. In total, a frame can carry N_S × (F_R + F_D) slots. The main idea is to model the delay necessary to access the channel. A datagram remains in the queue while the effective transmission has not been done by the lower layers. Parts of the datagrams are sent every T_F, and the transmission at the network layer level is completed once the whole packet has been transmitted by the lower layers. PCA has been developed with the DVB-RCS2 specifications in mind. We illustrate one implementation of the event-driven approach in this context. Indeed, we model the distribution of the capacity between the different users based on the load of the network and different access strategies. The idea is to be realistic enough while easing the implementation.

5.1.2 NS-2 module implementation details

In order to emulate delays, PCA is implemented as a queueing policy. We inherit from the DropTail queue management scheme, of which our PCA sub-class redefines the methods
used to process the packets. Each node uses the enque() and deque() methods to add and remove packets from the queue. In Figure 5.1, we compare the enque() and deque() methods of DropTail and DropTail/PCA. With DropTail, when the enque() method adds packet P_N+1, it is added at the end of the sending buffer and transmitted when P_1, ..., P_N have been transmitted with the deque() method. With DropTail/PCA, when a packet is enque()ed, it is also added to the sending buffer. However, depending on the access method introduced, only a subset of the datagram is considered sent with each frame. When the last byte of a datagram has been transmitted, deque(), which is called every T_F (which, we recall, is the period of a frame), removes the packet from the sending buffer and passes it along. We now detail the data structures and algorithms of PCA.

Data structures

Our module implements linked lists to store information about the current flows and their packets.

Packets list

The packet list contains information about the different packets that have reached the access point node but have not been fully transmitted yet. Each packet is defined by:

appl_id: identifier of the flow;
pkt_seqno: sequence number in the flow;
frame_in: frame number after which the data of the packet starts to be transmitted;
frame_out: frame number at which the last bit of the packet will be transmitted;
bool_first_frame: boolean specifying if a connection needs to be established (first packet of the current terminal);
bool_lost: boolean specifying whether the packet is lost;
bool_rand: boolean specifying if the access method is random (bool_rand=1) or dedicated (bool_rand=0);
[Figure 5.1: Capacity allocation: enque() and deque() — with DropTail, enque(P_N+1) appends the packet to the access point buffer and deque() later forwards whole packets in order on the shared link; with PCA, only part of each packet is considered transmitted at every frame period, and deque() forwards a packet once its last byte has been sent]
bits_to_send: actual number of bits of the datagram that have not been sent yet;
bits_next_frame: number of bits that will be sent in the next frame;
remaining_slot_frame_appl_det: number of dedicated slots that remain for this packet's flow;
remaining_slot_frame_appl_rnd: number of random slots that remain for this packet's flow;
used_slot_frame_appl_rnd: number of blocks used by the current random access method.

Terminals list

This linked list is used to collect information relative to the currently active terminals. It tracks all the open connections and maintains information about the last transmitted datagrams.

appl_id: identifier of the terminal;
pkt_seq: sequence number of the last packet transmitted;
last_time_out: time when the last packet of the given terminal has been sent.

enque() method

The enque() method is called when the network layer passes a packet down to PCA. It registers the packet and its attributes for consideration in the capacity distribution process. Figure 5.2 summarises the operation of this function. Before enqueuing a packet, the method verifies whether the connection needs to be established and the terminal added to the terminals list. If the terminal is not new, the method checks whether the connection is still open. It then adds the packet with its characteristics to the packets list. If pkt_seqno=0, it sets the frame_in attribute of the packet (first frame where data from this packet starts being transmitted) depending on the access method. frame_out is initially set to +∞, and later updated by the adaptbitnextframe() method described in the next section.
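The packet list entry described above can be sketched as a C++ structure. The field names follow the text; the types are our assumptions, since the thesis does not give them:

```cpp
#include <cassert>

// Sketch of one entry of PCA's packet list.
struct PcaPacketInfo {
    int  appl_id;           // identifier of the flow
    int  pkt_seqno;         // sequence number in the flow
    long frame_in;          // frame after which the packet starts being sent
    long frame_out;         // frame at which the last bit is transmitted
    bool bool_first_frame;  // connection needs to be established
    bool bool_lost;         // packet lost by the random access method
    bool bool_rand;         // true: random access, false: dedicated access
    long bits_to_send;      // bits of the datagram not yet sent
    long bits_next_frame;   // bits scheduled for the next frame
    int  remaining_slot_frame_appl_det; // dedicated slots left for the flow
    int  remaining_slot_frame_appl_rnd; // random slots left for the flow
    int  used_slot_frame_appl_rnd;      // RA blocks used by the flow
};
```

A linked list of such entries is enough for the scheduling described in the next section, since deque() only ever walks the list in arrival order.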
[Figure 5.2: enque() method flowchart — get the application characteristics of p; if the application is new, add it to the application list, otherwise check whether the connection is closed; fix the frame_in attribute of p, get its sequence number and add p to the packet list]

deque() method

The deque() method emulates the arrival of a new frame at the receiver. It is called by a timer every T_F. It loops over the packet list, forwards the packets for which frame_out is the current frame to the receiving node, and updates the transmission progress of the other datagrams by adjusting their attributes. Figure 5.3 details how the number of bits to transmit in the next frame is calculated in the adaptbitnextframe() function. This process has been introduced to take care of:

fair distribution of the capacity with dedicated access methods;
determination of the erasure probability with random access methods (depending on the load of the link and the methods' performance as detailed in Table 3.3);
adaptation of the packet transmission to ensure that flows send their packets in the order they have been received.

To do so, adaptbitnextframe() updates the values of bits_next_frame, used_slot_frame_appl_rnd and frame_out as follows. At frame F, for each packet where frame_in < F, we compute B_remaining = bits_to_send − bits_next_frame, which corresponds to the data that remains to be transmitted. If B_remaining > 0, the packet is left in the queue; bits_next_frame, which corresponds to the data that will be transmitted at frame F + 1, is determined depending on the access method (as well as the number of slots
which it will use, used_slot_frame_appl_rnd) and bits_to_send is set to B_remaining. If B_remaining ≤ 0, frame_out is set to F + 1; the next packet for that terminal is then found in the packet list, its frame_in is set to F, and B_remaining is subtracted from its bits_to_send.

[Figure 5.3: adaptbitnextframe() function flowchart, used in the deque() method — loop over the packets; for each packet with frame_in < current frame, determine the number of available slots (maximum number of available slots for random access, slots shared between applications for dedicated access), adapt bits_next_frame, and once the whole packet fits, adapt frame_out, look for the following applicative packet and adapt its frame_in and bits_to_send depending on the remaining slots]

Random access methods and drop events

We detail above that the available capacity is shared every T_F and that the number of bits for each flow is determined by the adaptbitnextframe() function. This function also determines whether the datagram is lost, and adapts the value of the parameter bool_lost of the datagram. As defined earlier, one datagram is divided into PLDUs and, with random access methods, each PLDU can be sent on one RA block. Based on the performance of the random access method introduced (as detailed in Table 3.3) and the number of PLDUs on this RA block, we compute the probability P_err ∈ [0; 1] that the receiver fails to recover a PLDU after having received the RA block. For each PLDU of the given RA block containing data from datagram D, we draw a random value r ∈ [0; 1]. If r < P_err, the parameter bool_lost of D is set to 1 and the datagram will not be transmitted to the receiver. The datagram remains in the queue: we make the assumption that the transmitter does not have the non-acknowledgement of this PLDU from the receiver.

Antenna limitations

It is possible that some transmitters cannot send data on different frequencies at once. This limitation has to be considered when determining the maximum number of slots that a user is allowed to occupy on each frame. This is highly linked to the frame structure; in this case, a flow can only use N_S slots whatever the number of available frequencies. As an example, if N_S = 40 slots and:

F_R = 0 & F_D > 1 (dedicated access): a unique user can exploit N_S = 40 slots;
F_R = 1 & F_D = 0 (random access), N_block = 3, N_ra = 40: a unique user can exploit N_S / N_block = 13 slots.

Tcl scripts for PCA

In this section, we present how to link Tcl parameters with PCA internal parameters. The DropTail/PCA queuing policy is implemented in two files (pca.cc and pca.h) located in the queue/ sub-directory of the NS-2 source. We detail the content of these files here. The parameters are set in the standard NS-2 fashion:

Queue/DropTail/PCA set <PARAMETER> <VALUE>

The following parameters have to be specified prior to starting a simulation:

cutconnect_: time after which the connection between the gateway and the user is closed (in seconds);
esn0_: signal-to-noise ratio of the channel in dB (for random access methods performance);
switchaleadet_: sequence number at which the access method switches from random to dedicated;
frameduration_ (T_F): duration of a frame;
nbslotperfreq_ (N_S): number of time slots per frequency;
sizeslotrandom_ (N_data): useful number of bits that can be sent on one RA block (i.e., where random access methods are introduced);
sizeslotdeter_ (N_data): useful number of bits for each time slot where dedicated access methods are introduced;
rtt_: two-way link delay (in seconds);
freqrandom_ (F_R): number of frequencies used for random access;
nbfreqperrand_ ((F_R × N_S)/N_ra): number of frequencies comprised in an RA block;
freqdeter_ (F_D): number of frequencies used for dedicated access;
maxthroughtput_: maximum authorized throughput for one given flow (in Mbps);
nbslotrndfreqgroup_ (N_block): number of blocks a PLDU is split into for distribution in one RA block;
boolantennalimit_: boolean indicating whether one transmitter has one or F_R + F_D antennas.

In order to introduce PCA, the link between two nodes N1 and N2 (N1 is the access point node) can then be defined as:

$ns simplex-link $N1 $N2 $bandwidth [$rtt_ / 2] \
    DropTail/PCA $random_access_file_performance

where $random_access_file_performance is the name of the file containing information about random access performance, laid out as in Table 3.3.

Limits and extendability of PCA

PCA can be used to conduct large studies on (MF-)TDMA schemes; however, it has some limitations. First, the performance of random access methods depends on the signal-to-noise ratio of the specific link between one receiver and the access point. It is currently assumed that this value is the same for all receivers, but this limitation can easily be lifted by adapting the receiver-to-SNR mapping code. Second, PCA does not consider prioritization between flows. Nonetheless, this could be achieved by flagging data units at higher layers and inspecting these flags in the deque() and adaptbitnextframe() functions.
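Returning to the random access path: the drop decision described in the drop events section above can be sketched as follows. P_err would come from the random access performance file; the RNG handling is our own choice, not PCA's actual code:

```cpp
#include <cassert>
#include <random>

// Draw r uniformly in [0,1); the PLDU is declared lost when r < p_err,
// p_err being the recovery failure probability for the current RA block load.
bool pldu_lost(double p_err, std::mt19937 &rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    return uniform(rng) < p_err;
}
```

With p_err = 0 the PLDU is never lost and with p_err = 1 it always is; intermediate values reproduce the loss rate of the chosen random access method at the current load.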
The development of this module has been driven by (MF-)TDMA specifications. However, it can easily be extended to other similar access methods (time and/or frequency multiplexing) by adapting the adaptbitnextframe() method to reflect the specific data scheduling scheme of the desired technique. Also, in the current implementation, it was considered that one flow could only send a limited amount of data per RA block. This quantity can be adjusted in the simulation parameters (through sizeslotrandom_ and nbslotrndfreqgroup_). To summarize, Physical Channel Access (PCA) implements an event-driven approach by modeling the behaviour of the protocols introduced at both the physical and link layers. We developed PCA with the DVB-RCS2 specifications in mind; however, PCA can easily be extended to conduct large studies on (MF-)TDMA schemes with minor modifications.

5.2 Access methods

In this section, we present the dimensions of the frame that defines on which frequency and when a user can transmit data. We also specify and justify the parameters used for each access method.

Parameters

In our simulations, we consider a Multi-Frequency Time Division Multiple Access (MF-TDMA) scheme where users' packets are distributed over 100 carriers (i.e., 100 frequencies). Thus, a frame of 45 ms length is composed of 4000 slots. 100 slots are grouped in a random access (RA) block composed of 2.5 frequencies. We base the choice of parameters on the specifications defined in [28] and present them in Table 5.1. We provide more details about sizeslotrandom_ and sizeslotdeter_ below.
Table 5.1: Use case simulation parameters

Parameter | Dedicated | Random (CRDSA) | Random (MuSCA)
cutconnect_ | | |
esn0_ | | |
switchaleadet_ | 0 | |
frameduration_ | | |
nbslotperfreq_ (N_S) | | |
sizeslotrandom_ (N_data) | xx | |
sizeslotdeter_ (N_data) | 920 | xx | xx
rtt_ | | |
freqrandom_ (F_R) | | |
nbfreqperrand_ | | |
freqdeter_ (F_D) | | |
maxthroughtput_ | 1 Mbps | 1 Mbps | 1 Mbps
nbslotrndfreqgroup_ (N_block) | | |
boolantennalimit_ | | |

Access methods

Dedicated access

Each slot of length 1.09 ms carries 536 symbols. In the simulations, we consider a clear sky scenario with a signal-to-noise ratio equal to 8.6 dB. We assume that the users apply a code of rate R = 2/3 combined with an 8PSK modulation to encode a packet of 920 information bits into a codeword of 1380 bits, i.e., 460 symbols. Due to the encapsulation at the physical layer, the physical layer data unit is increased to 536 symbols.

Random access

Users connecting to the satellite with random access generally use a lower operating point than for the dedicated access. In the rest of this chapter, we take a margin of 3.5 dB. Thus, for the clear sky scenario, we consider that: (Es/N0)_random = (Es/N0)_dedicated − 3.5 dB = 5 dB. In systems using CRDSA at 5 dB, each user can apply error-correcting codes of rate R_CRDSA = 2/3, associated with QPSK, to encode a packet of 613 bits (597 information bits and 16 header bits) into a codeword of 920 bits, i.e., 460 symbols. The error-correcting code used is a turbo code. As detailed in [38], N_block bursts of length about 530 symbols are then created. The number of generated bursts depends on the version of CRDSA. In this section, we study the performance of regular CRDSA-3 (N_block = 3). The N_block bursts are transmitted randomly into N_block slots of an RA block. In the case where the random access method used is MuSCA, users encode a packet of 680 bits (594 information bits and 86 header bits) with a turbo code of rate 1/4 associated with QPSK modulation to create codewords of 1380 symbols. The codeword is split into N_block = 3 parts to generate N_block = 3 bursts (detailed in [35]) sent on time slots of the same RA block. Figure 5.4 depicts the performance in terms of packet loss ratio (PLR) depending on the number of packets transmitted per RA block by CRDSA-3 and MuSCA-3. We show that MuSCA supports more users without introducing errors, even though we consider neither the most recent version of CRDSA-3 nor an optimized random access method. We compare random and dedicated access methods, without extensively comparing the different random access methods.

[Figure 5.4: Packet loss rate for random access methods at 5 dB — PLR versus number of packets sent on an RA block of 100 slots, for CRDSA and MuSCA]
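The burst sizing above follows directly from the code rate and the modulation order; a small sketch reproduces the dedicated and CRDSA symbol counts (MuSCA's 1380-symbol codeword includes overheads this simple computation does not capture):

```cpp
#include <cassert>

// Codeword length in bits for `info_bits` encoded at rate num/den.
int codeword_bits(int info_bits, int rate_num, int rate_den) {
    return info_bits * rate_den / rate_num;
}

// Number of modulation symbols carrying a codeword
// (8PSK: 3 bits per symbol, QPSK: 2 bits per symbol).
int symbols(int cw_bits, int bits_per_symbol) {
    return cw_bits / bits_per_symbol;
}
```

Dedicated access: 920 bits at rate 2/3 gives a 1380-bit codeword, i.e., 460 8PSK symbols; CRDSA's 920-bit codeword likewise maps to 460 QPSK symbols.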
Comparison between dedicated and random access methods

With a dedicated access method, the channel is reserved for the user, which enables the NCC to choose an optimal modcod. As a result, the use of the satellite link capacity is optimized. It follows that (1) the communication is reliable and (2) the throughput is maximal. With random access methods, there is no resource reservation request step, which reduces the delay of access to the link. The modcod cannot be optimized, as the channel between the home user and the gateway is not known. We developed a tool that enables us to compare the impact of these access methods on the performance of transport layer protocols. Even though the spectrum efficiency is optimized with dedicated access methods, random access methods reduce the time a session uses the resource. We evaluate and measure the different benefits of each access method.

5.3 Enabling random access methods for data traffic

In this section, we measure whether transmitting data over random access methods is of interest. We introduce a variable number of TCP sessions and measure the efficiency of each transmission.

Problem presentation

We consider the parameters detailed in Table 5.1, i.e., there are (F_D + F_R) × N_S = 100 × 40 = 4000 slots per frame. We also introduce the antenna limitations detailed above, which limit the maximum number of slots per TCP session to 13 for random access methods and 40 for the dedicated access method. Moreover, the capacity is fairly shared between the N_U users with the dedicated access method; therefore the maximum number of slots per TCP session in this case is: min((F_D + F_R) × N_S / N_U, N_S) = N_S × min((F_D + F_R) / N_U, 1). When N_U ≥ F_D + F_R (i.e., with our parameters, when N_U ≥ 100), a TCP session can transmit N_S / N_block × N_data = 40/1 × 920 = 36 800 bits per frame with a dedicated access, or 40/3 × 594 = 7722 bits with MuSCA as random access.
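The per-frame budget above can be sketched as follows, with the parameter values taken from the text (F_D + F_R = 100 carriers in total, N_S = 40 slots per frequency):

```cpp
#include <algorithm>
#include <cassert>

// Bits per frame for one TCP session under dedicated access: the capacity is
// fairly shared among n_users, and the antenna limit caps a session at n_s slots.
long dedicated_bits_per_frame(int freqs_total, int n_s, int n_users, int n_data) {
    int slots = std::min(freqs_total * n_s / n_users, n_s);
    return (long)slots * n_data;
}

// Bits per frame under random access: the antenna limit allows n_s / n_block
// slots, each carrying n_data useful bits.
long random_bits_per_frame(int n_s, int n_block, int n_data) {
    return (long)(n_s / n_block) * n_data;
}
```

At N_U = 100, this gives 36 800 bits per frame with dedicated access versus 7722 bits with MuSCA, matching the figures above.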
It seems that dedicated access methods enable each TCP session to transmit more data per frame. However, when the
load of the network increases, (1) the maximum number of slots available per TCP session decreases with dedicated access methods, and (2) the error probability with random access increases. In this section, we compare dedicated and random access methods to assess the impact of the network load on their performance.

5.3.2 Traffic generation

We consider two nodes in NS-2. The first node, which represents the set of STs, transmits a variable number of TCP sessions to the second node, which acts as the gateway and implements the module detailed in Section 5.1. We ensure data transmission lasts the complete simulation duration (20 s). The size of the IP datagrams is 1500 bytes, and the queue at the transmitter is large enough to prevent overflow. We use the Linux implementation of TCP CUBIC [22], with SACK options, as transport layer congestion control. With such traffic, we can investigate the problem presented in Section 5.3.1 by measuring whether there is a network load that makes random access methods more suitable than dedicated access methods for transmitting data.

5.3.3 Throughput and datagram loss rate

We show in Figure 5.5 the average number of datagrams sent per TCP session. Additionally, Figure 5.6 gives the error probability, considering the number of datagrams dropped at the gateway level (i.e., those that cannot be recovered by the receiver) and the number of datagrams successfully transmitted. Finally, to better assess the impact of datagram errors on the transmission efficiency, we show in Figure 5.7 the efficiency, defined as the ratio between the average number of transmitted datagrams and the maximum number of datagrams that could have been transmitted without errors. From Figure 5.6, we see that datagram errors appear from N = 100 for CRDSA and N = 300 for MuSCA.
This results in a transmission efficiency (the ratio between the average number of datagrams transmitted and the maximum number of datagrams that can be transmitted in 20 s per FTP session) that decreases when the number of FTP sessions increases, as illustrated in Figure 5.7. However, Figure 5.5 illustrates that when N ≥ 300, one TCP session transmits more datagrams with the dedicated access method than with
any random access method. Moreover, when the load increases, datagram errors increase markedly with random access methods and, as a result, the average number of datagrams sent per TCP session decreases. As an example, with MuSCA as access method, one TCP session transmits on average 50 datagrams, whereas with dedicated access, one TCP session transmits on average more than 350 datagrams.

Figure 5.5: Average number of datagrams sent per TCP session in 20 s (versus number of FTP sessions, for dedicated access, CRDSA and MuSCA)

Figure 5.6: Average number of datagrams lost per TCP session in 20 s (datagram error probability versus number of FTP sessions, for CRDSA and MuSCA)
Figure 5.7: Transmission efficiency (versus number of FTP sessions, for CRDSA and MuSCA)

5.3.4 Discussion

We measured through the NS-2 simulations that, even though the capacity of each TCP session decreases when the load of the network increases with dedicated access, these methods still transmit more data than random access schemes, which lose a significant number of datagrams: phenomenon (1) of Section 5.3.1 outweighs phenomenon (2). Indeed, we earlier derived that the maximum number of slots available per frame per TCP session is N_S × min((F_D + F_R) / N_U, 1). When N_U ≥ (F_R + F_D), a TCP session can transmit (F_R + F_D) × N_S / N_U × N_data^DE bits per frame with dedicated access, and N_S / N_block × N_data^RA bits with MuSCA as a random access method. For a future random access method to be viable, the number of users N_U ≥ (F_R + F_D) from which it transmits more data verifies Equation (5.1):

(F_R + F_D) × N_S / N_U × N_data^DE ≤ N_S / N_block × N_data^RA
⇔ N_U ≥ (F_R + F_D) × N_S × N_data^DE / (N_S / N_block × N_data^RA)    (5.1)

Figure 5.8 illustrates the performance that a random access method should have to be more efficient than dedicated access methods when the load on the network is high. We denote by N_MaxRA the number of users from which the random access methods start to introduce errors (i.e., N_MaxRA = 300 with MuSCA in our case).
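The break-even point of Equation (5.1) can be evaluated numerically. This sketch assumes the chapter's parameters (F_R + F_D = 100, N_S = 40 slots, N_data^DE = 920 bits, N_data^RA = 594 bits, N_block = 3) and uses an integer number of codewords per frame on the random access side:

```python
# Numerical evaluation of Equation (5.1) with the chapter's parameters (assumed).
import math

F = 100             # F_R + F_D
N_S = 40            # slots
N_block = 3         # bursts per MuSCA codeword
N_data_DE = 920     # bits per frame slot, dedicated access
N_data_RA = 594     # information bits per MuSCA codeword

# (5.1): F * N_S / N_U * N_data_DE <= floor(N_S / N_block) * N_data_RA
N_U_min = math.ceil(F * N_S * N_data_DE / ((N_S // N_block) * N_data_RA))
print(N_U_min)  # 477: random access carries more data per session only beyond this load
```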
Figure 5.8: Illustration of N_U (number of datagrams sent versus number of users, for dedicated access, random access and a hypothetical future random access method; N_MaxRA is marked)

With our parameters, applying (5.1), the minimal desirable N_U is such that N_U ≥ 477. With MuSCA as a random access method, these N_U users would be equally spread among the 40 RA blocks. There would be N_U × 13/40 = 6201/40 ≈ 155 users per RA block, which MuSCA cannot carry, as illustrated in Figure 5.4. As far as we know, there is no random access method that can carry such traffic and verify (5.1). However, this equation can help to assess the load from which it becomes interesting to transmit data with random access methods. Both the NS-2 simulations and the mathematical expressions illustrate that the transmission of data is more efficient with dedicated access methods, as random access methods carry less data on one given frame and errors might occur. However, the next section looks more closely at the detailed performance of one given flow in order to explain the interest of introducing random access methods to carry short data flows.

5.4 Transmission times of short flows

Section 5.3 concludes that the transmission of data is more efficient with dedicated access methods. However, considering that (1) there is a connection delay introduced by dedicated access methods, and (2) there is a significant proportion of short flows in the Internet (measured in [39, 40]), we now study the benefits that random access methods can provide in terms of transmission delay for short flows when there are no errors (we focus on cases
when N < N_MaxRA in Figure 5.8).

5.4.1 TCP sessions

We consider the same parameters as those presented in Section 5.3.1. However, while all these TCP sessions were introduced between t = 0 s and t = 20 s, we now introduce the transmission of 30 kB (i.e., 20 datagrams) starting at t = 10 s. We use the Linux implementation of TCP CUBIC [22], with SACK options, as the transport layer congestion control. We consider flows that do not lose datagrams when random access methods are introduced. In Figure 5.9, we plot the evolution of the TCP segment sequence for one typical flow decoded at the receiver side with 150 TCP sessions. We can see the progression of the TCP congestion window in the slow start phase, with CWND and RTT shown in the figure. Overall, this figure illustrates that the RTT needed for the connection when dedicated access is involved delays the transmission of the first datagrams. We can also see that the time needed to transmit two datagrams is smaller with dedicated access (denoted T2) than with random access (denoted T1). As a result, with dedicated access, the progression of the congestion window is faster, but starts later.

Figure 5.9: Evolution of TCP segment sequence number reception (sequence number versus time, for dedicated access, CRDSA and MuSCA; RTT, T1, T2 and CWND are annotated)

In Figure 5.10, we show the average time needed to transmit a certain number of datagrams, avgTime, for a flow that did not lose datagrams, with 150 users: for datagram
i: if recTime(i) is the reception time of datagram i, then avgTime(i) = (Σ_{k=1}^{i} recTime(k)) / i.

Direct connection with random access methods enables the first datagrams to be received faster. The number of datagrams after which dedicated access catches up with random access methods depends on the number of users. We illustrate that the first 1500-byte datagrams are transmitted noticeably earlier with random access methods. As an example, at t = 1.5 s, 8 datagrams have been received with the dedicated access method, against 14 with MuSCA. At t = 2.7 s, MuSCA, CRDSA and dedicated access have all received 42 datagrams.

Figure 5.10: Cumulated reception time (average time of reception versus sequence number of received datagram, for dedicated access, CRDSA and MuSCA)

In order to confirm the previous statement, we assessed the time needed to transmit 30 kB (i.e., 20 datagrams) under various conditions. The transfer of the file started after t = 10 seconds of simulation (while the competing flows started at t = 0 seconds). We present the results in Table 5.2. When there are 200 TCP sessions, both CRDSA and MuSCA transmit the 30 kB faster than dedicated access, by 90 ms. We confirm that when there are more TCP sessions, the time needed to transmit 30 kB with dedicated access methods is even larger, i.e., resulting in lower throughput for each user. Conversely, the transmission of 20 datagrams is faster with random access methods (when there are no errors). We propose in the next section to validate this interpretation by considering a more realistic traffic model.
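The avgTime metric defined above can be sketched in a few lines. The list of reception timestamps is hypothetical example data, not measurements from the simulations:

```python
# Sketch of avgTime(i): mean reception time of the first i datagrams of a flow.

def avg_time(rec_time: list[float], i: int) -> float:
    """avgTime(i) = (sum of recTime(1..i)) / i, with 1-based datagram indices."""
    return sum(rec_time[:i]) / i

rec_time = [0.5, 1.0, 1.5, 2.2]          # example reception times in seconds
print(avg_time(rec_time, 3))             # (0.5 + 1.0 + 1.5) / 3 = 1.0
```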
Table 5.2: Transmission times of 30 kB (reception dates of the first and last packet, for 150 and 200 competing flows, with dedicated access, CRDSA and MuSCA)

5.4.2 HTTP traffic with Packmime

Packmime [92] is an NS-2 module that models HTTP traffic. It is controlled by a rate parameter, i.e., the average number of new connections that start each second. This module enables us to model clients (ST) that send requests to the servers (satellite gateway). The transport layer protocol is TCP NewReno [18] with SACK options. The size of the requests generated by the clients is within [150; 650] bytes and the rate is set to 500. We define the transmission time as the time between the transmission of the first request (SYN/SYN ACK of 40 bytes) by the client and the reception of the last request at the server. We present the results in Table 5.3, which shows the minimum, median and maximum request transmission times over 2000 requests sent at random moments of the simulation by the Packmime traffic generator, for dedicated and random access methods (the performance of MuSCA and CRDSA is identical).

Table 5.3: HTTP request transmission times (minimum, median and maximum transmission times in seconds, for dedicated and random access)

This confirms, with a specific HTTP traffic, that the transmission of the shortest requests is faster with random access methods than with dedicated access methods when requests are short enough.
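The Packmime workload quoted above (500 new connections per second, request sizes uniform in [150; 650] bytes) can be mimicked with a toy generator. This is a stand-in for illustration only; the names and structure are not Packmime's API, and Poisson arrivals are an assumption:

```python
# Toy stand-in for the Packmime workload described in the text.
import random

def generate_requests(rate: float, duration: float, seed: int = 1):
    """Exponential inter-arrivals at `rate` conn/s, sizes uniform in [150, 650] B."""
    rng = random.Random(seed)
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(rate)                   # Poisson arrival process
        if t >= duration:
            break
        requests.append((t, rng.randint(150, 650)))  # (start time, request bytes)
    return requests

reqs = generate_requests(rate=500, duration=2.0)
print(len(reqs))  # roughly rate * duration, i.e. about 1000 connections
```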
5.4.3 Short flows and errors

The conclusions from the previous section must be qualified by evaluating the impact of a network load increase on the transmission time of short flows. We measure the transmission time of short flows when the load of the network is too high for random access methods to transmit data without error events (N > N_MaxRA). We measure the maximum transmission times of short flows when error events occur. The goal of this section is also to determine the minimum delay introduced by retransmissions depending on the number of datagrams and the state of the network. We consider that one user transmits D datagrams. We denote by P_err(N) the probability of losing a datagram with N users (from Figure 5.6), and by {d_1; d_2; ...; d_D} the set of datagrams of one flow. P(r_{d_i} = R) is the probability that datagram d_i is retransmitted R times and is determined by (5.2):

∀R ∈ ℕ, ∀N ∈ ℕ, ∀d_i, P(r_{d_i} = R) = (1 − P_err(N)) × P_err(N)^R    (5.2)

We determine the probability to have at least R̂ retransmissions for one datagram. Based on the fact that one retransmission increases the delay by at least one RTT, we propose a simple lower bound on the time needed to transmit a flow composed of several datagrams. As illustrated in Figure 5.11, we focus on the first datagrams of one TCP session and do not consider congestion avoidance. We compute a lower bound on the supplementary delay introduced by loss events:

P(R̂) = Σ_{k=0}^{D−1} C(D, k) × P(r < R̂)^k × P(r = R̂)^{D−k}
     = Σ_{k=0}^{D−1} C(D, k) × (Σ_{i=0}^{R̂−1} P(r = i))^k × P(r = R̂)^{D−k}    (5.3)

Using (5.2) and (5.3), we propose some numerical evaluations in Table 5.4. We use the error probability depending on the number of users from Figure 5.6. Table 5.4 gives this lower bound on the minimal delay introduced when retransmissions are needed, together with the probability for this event to occur.
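Equations (5.2) and (5.3) are straightforward to evaluate numerically. In this sketch the loss probability p_err is a hypothetical value (in the text it is read off Figure 5.6), so no specific table entry is reproduced:

```python
# Numerical sketch of Equations (5.2)-(5.3).
from math import comb

def p_retx(p_err: float, R: int) -> float:
    """(5.2): P(r_di = R) = (1 - p_err) * p_err**R."""
    return (1 - p_err) * p_err ** R

def p_R_hat(p_err: float, R_hat: int, D: int) -> float:
    """(5.3): sum over k of C(D, k) * P(r < R_hat)^k * P(r = R_hat)^(D - k)."""
    p_lt = sum(p_retx(p_err, i) for i in range(R_hat))
    return sum(comb(D, k) * p_lt ** k * p_retx(p_err, R_hat) ** (D - k)
               for k in range(D))

# Example: D = 3 datagrams, hypothetical loss probability of 8%, R_hat = 1
print(p_R_hat(0.08, 1, 3))
```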
When N ≥ N_MaxRA (i.e., when the load of the network starts to provoke error events), the probability for a retransmission to happen cannot be neglected. During the transmission of 3 datagrams, the probability for one of them to be lost can be up to 23% with CRDSA and 300 TCP sessions, and provokes at least an
increase of 500 ms in transmission time. We showed in Figure 5.10 that the transmission of 3 datagrams without loss can be faster by 380 ms with random access methods: when a loss event occurs, the whole benefit provided by random access methods is lost.

Table 5.4: Retransmission probabilities (number of users; number of retransmissions; minimal delay in seconds; P(R̂) for CRDSA and for MuSCA)

In NS-2, we consider the same parameters as those presented in Section 5.3.1. However, while all these TCP sessions are introduced between t = 0 s and t = 20 s, we introduce the transmission of 4500 B (i.e., 3 datagrams) starting at t = 2 s. When there are 200 TCP sessions (resp. 300 TCP sessions), with CRDSA (resp. MuSCA) as random access method, over 250 runs, the maximum transmission time is s (resp. s).

5.5 Mixing random and dedicated access methods

We verified in Section 5.4 that the transmission of the first packets of a flow is indeed faster with random access methods when the load of the network is low (i.e., N < N_MaxRA). We show in Figure 5.11 that the benefits provided by random access vary with the load of the network. Indeed, when the load increases, the capacity is shared among the users with dedicated access and the time needed to transmit a certain amount of datagrams increases. We also quantify when datagram errors can introduce a delay that cancels the benefits provided by random access methods. We present in Figure 5.12 the switch that could be introduced to improve the transmission of both short and long flows. The idea is to enable a faster transmission for the
first packets of one given TCP session by adapting the choice of the access method. To speed up the transmission of the first datagrams, they must be transmitted over random access methods, but only when the network load is low enough for the random access methods not to introduce errors.

Figure 5.11: Datagram errors and short flows (reception time versus datagram sequence number, for N1, N2 and N3 users with random and dedicated access; N2 > N1, N1 < N_MaxRA, N2 < N_MaxRA, N3 > N_MaxRA; the switching points SEQ(N1), SEQ(N2) and SEQ(N3) are marked)

Figure 5.12: Switch from random to dedicated access (reception time versus datagram sequence number, for N users; the switch occurs at SEQ = f(N))

The results presented in this chapter let us argue that, for a given TCP session, switching
from random access to dedicated access is of interest. It improves the end user's experience of the service and the spectrum efficiency, as during connection establishment the first data packets would already be sent on RA blocks. The authors of [93] present analytical experiments which measure the benefits, for a given application, of such switching between random and dedicated access. Their results call for further discussion of these benefits with simulations and real-world experiments. Moreover, this idea is present in IP over Satellite (IPOS) standards(2): we argue for the integration of such a strategy in the current DVB-RCS2 standards. We propose a capacity distribution that, at t = k × T_F:

- estimates the load on the network for t ∈ [k × T_F; (k + 1) × T_F] and determines SEQ(N), the sequence number from which it is interesting to switch from the random access method to the dedicated access method for each flow;
- determines the number of RA blocks that can be introduced, considering that the transmission must be safe, and that maximizes the number of flows for which the sequence number of the datagram to send is less than SEQ(N);
- allocates capacity with dedicated access to all flows whose current sequence number is greater than SEQ(N).

Based on the results presented in this section, we believe that this strategy can optimize the transmission of requests or of large e-mails. Mixing random and dedicated access methods is definitely of interest when the load of the network is low.

5.6 Discussion

In this chapter, we compare the impact of both dedicated and random access methods on the performance of TCP in the context of DVB-RCS2. The current standards do not define the channel access strategy of the return satellite link (transmission of data from home users to satellite gateways). We present an NS-2 module, PCA, that enables us to simulate the access to the DVB-RCS2 return link.
(2) Purchasable from the Telecommunications Industry Association (TIA), which published TIA-1008-A, IP over Satellite (IPOS).

We measure that the transmission of data is more efficient with dedicated access methods, as random access methods carry less data on one given frame and errors
might occur. However, we show that the transmission of the shortest files is faster with random access methods than with dedicated access methods. Considering the fact that errors can slow down the transmission of short files, we propose a capacity distribution that mixes both random and dedicated access methods depending on a dynamic estimation of the load on the network and on the sequence number of each TCP session.
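The switching strategy proposed in Section 5.5 can be sketched as a small decision rule. This is a minimal illustration, not part of DVB-RCS2 or of the PCA module: seq_threshold, its parameter values and assign_access are hypothetical placeholders for the SEQ(N) estimation step.

```python
# Minimal sketch of the proposed capacity distribution: at each frame, flows
# whose next sequence number is below a load-dependent threshold SEQ(N) use
# random access, the others receive dedicated slots.

def seq_threshold(n_users: int, n_max_ra: int = 300, base: int = 20) -> int:
    """SEQ(N): allow random access for early datagrams only while the RA
    channel can carry the load without errors (shape assumed for illustration)."""
    return base if n_users < n_max_ra else 0

def assign_access(flows: dict[str, int], n_users: int) -> dict[str, str]:
    """flows maps a flow id to the sequence number of its next datagram."""
    seq_n = seq_threshold(n_users)
    return {fid: ("random" if seq < seq_n else "dedicated")
            for fid, seq in flows.items()}

print(assign_access({"short": 2, "bulk": 150}, n_users=150))
# {'short': 'random', 'bulk': 'dedicated'}
```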
Chapter 6

Leveraging queuing delays to introduce less-than-best-effort traffic on satellite paths

This chapter provides an analysis of the performance of Low Extra Delay Background Transport (LEDBAT) as a legitimate Less-than-Best-Effort (LBE) method for background applications in the context of congested large bandwidth-delay product (LBDP) networks. The IETF recently published an RFC for LEDBAT, a congestion control algorithm for LBE transmissions. The rationale is to explore the possibility of grabbing the unused capacity of 4G satellite links with LEDBAT in order to carry non-commercial traffic. We show that, depending on the fluctuation of the load, performance improvements can be obtained by properly setting LEDBAT's internal parameters. We generalize this evaluation over different congested LBDP networks and confirm that the target value might need to be adjusted to the network and traffic characteristics [43]. The rest of this chapter is organised as follows. We justify that this protocol is an ideal candidate for LBE background transmissions in Section 6.1. We propose simulations in 4G satellite contexts in Section 6.2, where we show that LEDBAT's queuing time target has an impact on performance. We generalize this evaluation over different congested LBDP networks in Section 6.3 to further assess this impact. In Section 6.4, we propose a discussion on the portability of the results presented in this chapter.
6.1 LEDBAT versus TCP Vegas for LBE transmissions

In this section, we analyze the performance of LEDBAT over an LBDP scenario. As TCP Vegas does not perform well when mixed with other TCP variants, it could be a good alternative candidate for transmitting LBE traffic. The objective of this section is therefore to justify that LEDBAT is a better candidate. We run simulations with NS-2 and use the LEDBAT module validated in [44]. We checked that this module has been developed in accordance with the RFC. We model an LBDP link in NS-2: the capacity is set to 10 Mbps and the path delay is set to 250 ms. We consider two competing flows. We focus on the impact of the introduction of a secondary LBE flow (either TCP Vegas or LEDBAT) when the primary flow has reached full capacity. The primary flow transmits data for 800 s with CUBIC at the transport layer. The secondary flow runs from 500 s to 800 s. The DropTail queue size is considered infinite (we fixed it to a large value), and the size of the TCP segments is 1500 bytes. We present the combination of the different flows used in the simulation and their respective throughput in Table 6.1. For both flows, we report the mean throughput measured over the simulation period. When CUBIC is the only flow on the link, it occupies nearly all of the capacity (Case 1). The introduction of the LEDBAT flow causes a 0.02% reduction of the capacity occupied by the CUBIC flow (Cases 3-4). We also note that TCP Vegas exploits 6% of the capacity (more than LEDBAT), but the percentage of the capacity occupied by CUBIC decreases by 5.8% (Case 2). We conclude that even if TCP Vegas takes up little capacity, this protocol is more aggressive than LEDBAT. TCP Vegas is also more aggressive than LEDBAT in terms of link capacity utilization when they are the only two protocols involved in the simulation (Case 5).
Therefore, we believe that LEDBAT is a better candidate than TCP Vegas to transmit LBE traffic over long-delay paths without introducing congestion or severely affecting the other competing flows sharing the same path. The results gathered in Table 6.1 illustrate that, for an LBDP link, the queuing target has an impact on the throughput of the CUBIC flow when LEDBAT is introduced (Cases 3-4). In the following sections, we further explore the impact of this value in variously loaded satellite networks.
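The reason the queuing target matters so much can be seen in LEDBAT's window update. The sketch below follows the shape of the update rule in RFC 6817 (base-delay estimation, slow start and the exact GAIN constraints are omitted; the constants and the floor at one segment are simplifying assumptions): the window grows only while the estimated queuing delay stays below the target, and shrinks once the queue exceeds it.

```python
# Simplified sketch of LEDBAT's congestion window update, after RFC 6817.

MSS = 1500   # bytes
GAIN = 1.0   # assumed; RFC 6817 constrains GAIN so LEDBAT ramps no faster than TCP

def ledbat_cwnd_update(cwnd: float, queuing_delay: float, target: float,
                       bytes_acked: int = MSS) -> float:
    off_target = (target - queuing_delay) / target   # positive below target
    cwnd += GAIN * off_target * bytes_acked * MSS / cwnd
    return max(cwnd, MSS)    # never below one segment (floor assumed here)

cwnd = 10 * MSS
below = ledbat_cwnd_update(cwnd, queuing_delay=0.002, target=0.005)  # queue below target
above = ledbat_cwnd_update(cwnd, queuing_delay=0.010, target=0.005)  # queue above target
print(below > cwnd, above < cwnd)  # True True
```

A smaller target thus makes the flow yield as soon as a shallow queue builds up, which is consistent with the low-aggressiveness behavior measured in the following sections.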
Table 6.1: Comparison of LEDBAT and Vegas fairness to CUBIC (for each case, the transport protocols of Flows 1 and 2, the LEDBAT target in ms, and the throughput as a percentage of the capacity; Case 1: CUBIC alone; Case 2: CUBIC and TCP Vegas; Cases 3-4: CUBIC and LEDBAT; Case 5: TCP Vegas and LEDBAT)

6.2 LEDBAT over a 4G Satellite Network

In this section, we explore the impact of the target queuing delay, specifically focusing on the performance of LEDBAT in a 4G satellite network. We consider a mobile receiver and assess the performance of LEDBAT over the satellite link.

6.2.1 4G Satellite Network Configuration

To drive this experiment, we use the NS-2 extension called Cross-Layer InFormation Tool (CLIFT), presented in Section 4.1, which allows replaying real physical layer traces inside NS-2. The 4G satellite link trace used was provided by CNES.(1) The simulations for this scenario represent the communication between a single mobile user and a satellite gateway. We focus on CUBIC as it is now enabled by default in GNU/Linux and Android systems. The mobile user sends data to the satellite gateway using CUBIC or LEDBAT at the transport layer and a retransmission mechanism (ARQ) at the link layer. As before, the queue is large enough not to overflow, and the size of the TCP segments is 1500 bytes.

(1) CNES is the government agency responsible for shaping and implementing France's space policy in Europe.
The physical traces are detailed in Section 4.2. We aim to study the impact of LEDBAT on competing CUBIC flows and its ability to exploit capacity when the network is not fully loaded. The simulation lasts 450 s. We consider that the mobile receiver transmits data with the CUBIC protocol from 0 s to 225 s and from 270 s to 450 s. From 112.5 s to 337.5 s, data is transmitted with the LEDBAT protocol. Based on LEDBAT's RFC [46], we consider a representative set of target values τ ∈ {5; 15; 25; 100} ms.

6.2.2 Simulation Results

Table 6.2: LEDBAT over 4G satellite (for each target τ in {5, 15, 25, 100} ms, the capacity in kbps used by the CUBIC and LEDBAT flows over each simulation period)

We present the results for this scenario in Table 6.2. When a CUBIC flow attempts to send data (t ∈ [112.5; 225] or t ∈ [270; 337.5]), the LEDBAT flow does not manage to transmit data. When the primary flow does not transmit (t ∈ [225; 270]), the LEDBAT flow uses this opportunity for its own traffic. The LEDBAT flow cannot use the whole available capacity, due to its low aggressiveness: after t = 225 s, there are still IP datagrams of the CUBIC flow in the queue waiting to be transmitted, and during t ∈ [225; 270] there are not enough receiver feedbacks for the LEDBAT congestion control to quickly increase its congestion window. Also, the smaller the target queuing delay, the more data the LEDBAT flow transmits during this less loaded period.
It is worth noting that, due to the size of the queue at the sender side, some datagrams remain in the queue while the sender does not add any datagrams. This results in a capacity which should be null, yet we measure that datagrams are transmitted. As a result, we consider that LEDBAT can be a very good candidate for LBE data transfer, using capacity when some is available but gracefully retracting when primary traffic is present. This also illustrates that the target queuing delay has an impact on the performance of LEDBAT. Decreasing this value allows LEDBAT to use the free capacity more efficiently. We propose, in the next section, to verify this statement and assess the impact of the target value in a more generic context, where the LBDP link is introduced in a loaded network.

6.3 LEDBAT performance in a loaded satellite network

In this section, we assess the impact of the number of LEDBAT flows and of their target queuing delay, depending on the capacity of the satellite path left over by the primary traffic.

6.3.1 Network configuration

As detailed in Fig. 6.1, we consider a simple architecture where the bottleneck is the satellite link. Three types of competing flows transmit data to Receiver 1. Each application is a file transfer using CUBIC as transport protocol. We consider L LEDBAT transmitters with L ∈ {1; 10; 25; 50} and the same set of targets τ ∈ {5; 15; 25; 100} ms. In order to assess how LEDBAT exploits the freed capacity when other transports reduce their rates, we need to introduce a limiting factor for congestion losses to occur. To do so, the queue at the gateway is fixed to 50 IP datagrams (i.e., a maximum queuing delay of 120 ms, which is higher than the target values). The size of the TCP segments is still 1500 bytes and the AQM mechanism is DropTail. The links between the different transmitters and the gateway (Link 1 in the figure) are defined by a capacity of 5 Mbps and a random delay d1 ∈ [20; 50] ms.
The satellite link (Link 2 in the figure) has a capacity of 5 Mbps and a delay of 250 ms. The simulation lasts for 300 s. The load variations on the network are presented in Table 6.3. In order to avoid the latecomer problems introduced by competing LEDBAT flows, all LEDBAT flows start at the same time. Also, we introduce different groups of CUBIC flows to obtain a controllable fluctuating traffic that enables us to better understand LEDBAT's behavior.

Figure 6.1: Network architecture (Groups A, B and C of transmitters and the L LEDBAT transmitters reach Receivers 1 and 2 through Link 1, the satellite gateway and Link 2)

Table 6.3: Simulation parameters (transmitters A1...AX, B1...BY, C1...CZ)

Group        | Nb flows        | Transmission times (s)
Group A      | 100             | [0;300]
Group B      | 100             | [0;30], [60;90], [180;210], [240;270]
Group C      | 100             | [0;75], [150;225]
Group Ledbat | {1; 10; 25; 50} | [0;90]
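The queue sizing in this configuration can be checked with a one-line computation: 50 IP datagrams of 1500 bytes drained at 5 Mbps give a worst-case queuing delay of 120 ms, which is indeed above every LEDBAT target considered (5 to 100 ms).

```python
# Quick check of the gateway queue sizing described above.
QUEUE_PKTS, PKT_BYTES, LINK_BPS = 50, 1500, 5_000_000
max_queuing_delay = QUEUE_PKTS * PKT_BYTES * 8 / LINK_BPS
print(max_queuing_delay)  # 0.12 s
```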
6.3.2 Presentation of the results: few users in the network

We only consider transmitters from Groups B and C (details in Section 6.3.1). To better assess the performance of the flows, we compute the goodput measured at the end of the whole simulation (i.e., the amount of useful data transmitted).

Figure 6.2: Capacity sharing depending on the target value and the number of flows, without Group A (capacity used by the LEDBAT flows, by the CUBIC flows, and cumulative used capacity, in percent, versus the target value in ms, for 0, 1, 10, 25 and 50 LEDBAT flows on the 5 Mbps link)

We present, in Fig. 6.2, the results in terms of the percentage of the capacity exploited by the LEDBAT flows, by the CUBIC flows, and overall. First of all, we can clearly see that the introduction of one LEDBAT flow reduces the percentage of the capacity used by the CUBIC flows. Considering the fact that the LEDBAT flows cause a decrease of the CUBIC flows' capacity (due to congestion), but also might greatly increase the utilization of the link capacity, we try to assess a trade-off between these two considerations depending on the target value and the number
of LEDBAT flows. Introducing flows in the network increases the utilization of the central link, but we focus on the fact that adapting the target value of LEDBAT makes it possible to (1) send more LBE data and (2) reduce the impact on the principal traffic. When the target queuing delay is increased and the number of flows is fixed, we can see in this figure that (1) LEDBAT flows exploit less capacity to transmit data, and (2) the capacity utilization of CUBIC flows decreases. When the value of the target changes from 100 ms to 5 ms and the number of LEDBAT flows is set to 50: (1) the capacity used by LEDBAT flows increases by 5%; (2) the capacity used by CUBIC flows increases by 2%; (3) the utilization of the link increases by 7%. Therefore, considering 50 LEDBAT flows and the network configuration detailed above, changing the target value from 100 ms to 5 ms increases the use of the capacity by 7%. While in the previous paragraph we considered a fixed number of flows, we now consider the benefits and impacts of increasing the number of flows. We can also see that when the target queuing delay is set to 5 ms, the cost of 8% of the capacity for the CUBIC flows enables the introduction of 50 LEDBAT flows that exploit 28% of the capacity. As a result, introducing 50 LEDBAT flows with a target value of 5 ms increases the utilized capacity of the central link by 20%. When the target queuing delay is set to 100 ms, the cost of 11% of the capacity for the CUBIC flows enables the introduction of 50 LEDBAT flows that exploit 22% of the capacity. In this case, the utilized capacity increases by 11%. Therefore, changing the target value from 100 ms to 5 ms (1) costs 6% less of the principal flows' capacity, (2) provides 6% more capacity to LEDBAT flows, (3) increases the use of the central link by 9%. We can conclude that, in the context of a high-delay path, the introduction of LEDBAT flows is optimized when the target value is set to 5 ms.
Setting this parameter to 5 ms makes it possible to introduce a large number of LEDBAT flows, greatly increasing the capacity utilization of the long-delay link at a low cost for the CUBIC flows.

6.3.3 Presentation of the results: fully loaded network

In this section, we consider that all groups A, B and C (details in Section 6.3.1) transmit data. The aim of this section is to assess whether we can draw the same conclusions as those presented in Section 6.3.2 when the long-delay link is fully loaded. In Fig. 6.3, we present the results in terms of used capacity (presented as a percentage of
the available capacity).

Figure 6.3: Capacity sharing depending on the target value and the number of flows, with Group A (capacity used by the LEDBAT flows, by the CUBIC flows, and cumulative used capacity, in percent, versus the target value in ms, for 0, 1, 10, 25 and 50 LEDBAT flows on the 5 Mbps link)

We can observe that the capacity of the link is fully exploited by the CUBIC flows even though the network is highly loaded: this is due to congestion control. In this context, we can note that the target value has an impact. Indeed, when the target queuing delay is increased and the number of LEDBAT flows is fixed, (1) more capacity is exploited by the LEDBAT flows, (2) this capacity used by LEDBAT flows is directly taken from the CUBIC flows' capacity, and (3) there are no significant benefits in terms of overall used link capacity. Indeed, when the target value changes from 5 ms to 100 ms, the capacity used by 50 LEDBAT flows increases by 5%, but the capacity used by the principal flows decreases by the same 5%. Thus, the congestion caused by 50 LEDBAT flows has a negative effect on the capacity dedicated to the principal flows, but its impact is smaller when the target value is set to 5 ms. As a consequence, in this loaded network with a high bandwidth-delay product link,
there is little remaining capacity that LEDBAT flows could exploit to transmit data. When the network is highly congested, introducing LEDBAT flows with a higher target queuing delay increases the congestion and does not increase the utilization of the capacity: the same amount of capacity is available and is shared between CUBIC and LEDBAT flows, and the capacity that LEDBAT flows take from the CUBIC flows grows with the target value. We thereby conclude that in a LBDP network, setting the target queuing delay to 5 ms is optimal.

6.4 Discussion

We focused on the optimization of the capacity of 4G satellite links because of a lack of studies in this area. The LEDBAT algorithm has been developed to support transmission for LBE applications. We illustrated in section 6.1 why we believe LEDBAT could be a good candidate for LBE traffic when a long delay link is present in the network. We also showed that the parametrization of the target value should not be neglected, as it has an impact on the performance. In sections 6.2 and 6.3, we illustrated that a trade-off must be found between (1) disturbing the primary traffic, (2) enabling LBE traffic and (3) increasing the use of the link capacity. When the network is fully loaded, LEDBAT is less aggressive when the target value is low, and has less impact on the capacity used by the principal flows. In our simulations, LEDBAT did not exhibit fairness when the target queuing delay was more than 5 ms. Conversely, when some capacity remains on the high delay path, setting this parameter to 5 ms still optimizes the transmission of LEDBAT flows and the use of the whole capacity. While current implementations of LEDBAT use a fixed target delay of 100 ms, we showed that introducing a large number of LEDBAT flows compromises the ultra-fairness of the LBE transport protocol. Reducing the target value improves LEDBAT performance while preserving its fairness.
We also illustrated that the optimal parametrization strongly depends on the network characteristics and on the primary traffic. It therefore seems ill-advised to rely on a fixed and static value for this parameter. We think that LEDBAT should take the current network conditions into account to dynamically adapt this target value. Considering also that the gain value affects LEDBAT's aggressiveness, and in light of the results presented in this chapter, we argue for extensive measurements in order to dynamically and jointly adapt the target and gain values and determine whether LBE traffic can be introduced.
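Purely as an illustration of the dynamic adaptation suggested above (and explicitly not a mechanism proposed in this thesis), one could imagine a heuristic that lowers the target when the one-way delay samples reveal standing queues built by competing primary traffic, and raises it again when the path looks idle:

```python
# Purely illustrative heuristic, entirely our own assumption: move the
# LEDBAT target between the 5 ms and 100 ms bounds studied in this
# chapter, depending on the measured queuing-delay samples.
TARGET_MIN, TARGET_MAX = 0.005, 0.100   # seconds

def adapt_target(target, queuing_delay_samples, step=0.005):
    """Shrink the target when the path shows standing delay (competing
    primary traffic), grow it back when the path looks idle."""
    avg = sum(queuing_delay_samples) / len(queuing_delay_samples)
    if avg > target:                  # others are queuing: yield harder
        target -= step
    else:                             # path idle: allow more buffering
        target += step
    return min(max(target, TARGET_MIN), TARGET_MAX)

# Standing delay above the target pulls the target down...
assert abs(adapt_target(0.050, [0.080, 0.090]) - 0.045) < 1e-9
# ...while an idle path lets it drift back up, within the bounds.
assert abs(adapt_target(0.050, [0.001, 0.002]) - 0.055) < 1e-9
```

Whether such a rule actually preserves the low-priority behaviour of LEDBAT on a long delay path is exactly the kind of question the extensive measurements advocated above would have to answer.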
Chapter 7 Conclusion
Novel ideas proposing interactions between nodes, the increasing number of users with demanding requirements and the large quantity of stored data require a tight understanding of the cross-layer impacts of the protocols implemented at the different layers of the protocol stack. Among all the metrics on which mobile end users have requirements, latency matters. In order to tackle this issue on long delay paths, the results presented in this document attempt to help service providers better integrate features into existing network architectures. In the context of satellite 4G links, we measured the impact of link layer retransmission schemes on the performance of various transport layer protocols. For DVB-RCS2 satellite links, we compared the performance of two access schemes (random and dedicated) to understand the way home users access the satellite link for Web browsing or data transmission. We explored the feasibility of exploiting information from satellite gateways to introduce low priority traffic; the rationale is to grab the unused capacity of 4G satellite links to carry non-commercial traffic.

As future work, we propose to investigate the possibilities of dynamically adapting the internal parameters of LEDBAT to integrate less-than-best-effort traffic on long delay paths. To this end, a PhD student, Si-Quoc-Viet Trang, started in September 2012, building on the results we present in this document. With the help of Emmanuel Lochin, I helped define the context and contributed to the first reflections on this topic.

On top of the work presented in this document, we ran simulations to take part in the current investigations around Bufferbloat [47]. Bufferbloat is a phenomenon in a packet-switched network in which excess buffering of packets inside the network causes high latency and jitter, as well as reducing the overall network throughput. This has been measured at the end user router level, and one existing solution is to implement CoDel [48].
CoDel is an Active Queue Management (AQM) scheme whose datagram drop events are based on the time spent by datagrams in the queue, which must not exceed 5 ms. In order to have a better understanding of the problem, we evaluated the impact of various AQM strategies on the end-to-end performance, depending on the transport protocol introduced. We confirmed the results presented in [49], which note that the delay introduced by the MAC layer should not be neglected, as it weighs on those 5 ms. The experience obtained through this PhD on inter-layering aspects gave me a better understanding of the problem, as any proposal should consider the delay introduced by the lower layers.
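The dropping decision of CoDel [48] can be sketched as follows: a packet is dropped at dequeue time when its sojourn time in the queue has stayed above the 5 ms target for at least one full interval. This is a simplified sketch under our own naming; real CoDel additionally shortens the interval while successive drops occur.

```python
# Minimal sketch of CoDel's first dropping decision: drop at dequeue
# when the sojourn time has exceeded TARGET for a whole INTERVAL.
# Class and method names are ours, not from an actual implementation.
TARGET = 0.005      # 5 ms acceptable standing queuing delay
INTERVAL = 0.100    # 100 ms: sojourn time must stay high this long

class CoDelSketch:
    def __init__(self):
        self.first_above_time = None   # deadline set on first excursion

    def should_drop(self, now, enqueue_time):
        sojourn = now - enqueue_time   # time this packet spent queued
        if sojourn < TARGET:
            self.first_above_time = None        # queue drained: reset
            return False
        if self.first_above_time is None:
            self.first_above_time = now + INTERVAL
            return False
        return now >= self.first_above_time     # persistently above target

q = CoDelSketch()
assert not q.should_drop(now=1.000, enqueue_time=0.990)  # first excursion
assert not q.should_drop(now=1.050, enqueue_time=1.040)  # within interval
assert q.should_drop(now=1.150, enqueue_time=1.140)      # persistent: drop
```

The sketch makes the point discussed above concrete: any extra delay added below IP, for instance by the MAC layer, is indistinguishable from queuing delay of this magnitude and therefore eats directly into the 5 ms budget.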
Appendix A List of Publications

Published

In [13], we have detailed the cross-validation of the Trace Manager Tool (TMT) used to implement link layer reliability schemes on physical layer traces. This work is presented in Section .

In [15], we present CLIFT and the measures obtained to assess the impact of link layer retransmissions on the performance of TCP over 4G satellite links. This work is gathered in Chapter 4.

In [91], we have published the results on the impact of link layer reliability schemes on the performance of TCP in the context of aeronautical communications.

In [50], Physical Channel Access (PCA), which models link layer channel access in NS-2, is presented. This work is described in Section 5.1.

In [51], we present measurements on the benefits of random access methods for the performance of TCP in the context of DVB-RCS2. This work is described in Sections 5.3 and 5.4.

In [43], we explore the potential use of LEDBAT as a congestion control to introduce LBE traffic on long delay paths. This work is presented in Chapter .
Submitted

In [14], we detail how the Cross-Layer InFormation Tool (CLIFT) loads link layer traces in NS-2. This work is described in Section .

In [52], PCA and the interest of random access methods to carry traffic on a DVB-RCS2 path are detailed. This work is presented in Chapter .
Bibliography

[1] M. Cheffena and F. Perez-Fontan, Channel Simulator for Land Mobile Satellite Channel Along Roadside Trees, IEEE Transactions on Antennas and Propagation, vol. 59, pp ,
[2] N. Celandroni and A. Gotta, Performance Analysis of Systematic Upper Layer FEC Codes and Interleaving in Land Mobile Satellite Channels, IEEE Transactions on Vehicular Technology, vol. 60, pp ,
[3] I. F. Akyildiz, D. M. Gutierrez-Estevez, and E. C. Reyes, The evolution to 4G cellular systems: LTE-Advanced, Physical Communication, Aug
[4] A. Larmo, M. Lindstrom, M. Meyer, G. Pelletier, J. Torsner, and H. Wiemann, The LTE link-layer design, IEEE Communications Magazine, vol. 47, pp , april
[5] N. Ewald and A. Kemp, Performance analysis of link-layer hybrid ARQ with finite buffer size, in IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pp. 1-5, sept
[6] E. Berlekamp, R. Peile, and S. Pope, The application of error control to communications, IEEE Communications Magazine, vol. 25, pp , april
[7] C.-F. Chiasserini and M. Meo, A reconfigurable protocol setting to improve TCP over wireless, IEEE Transactions on Vehicular Technology, vol. 51, pp , nov
[8] N. Celandroni, E. Ferro, G. Giambene, and M. Marandola, SAT01-3: TCP Performance in a Hybrid Satellite Network by using ACM and ARQ, in IEEE Global Telecommunications Conference (GLOBECOM), pp. 1-6, dec
[9] A. Chockalingam, M. Zorzi, and V. Tralli, Wireless TCP performance with link layer FEC/ARQ, in IEEE International Conference on Communications (ICC), vol. 2, pp ,
[10] C. Liu and E. Modiano, On the performance of additive increase multiplicative decrease (AIMD) protocols in hybrid space-terrestrial networks, Comput. Netw. ISDN Syst., vol. 47, pp , Apr
[11] C. Barakat and A. Al Fawal, Analysis Of Link-Level Hybrid FEC/ARQ-SR For Wireless Links and Long-Lived TCP traffic, Research Report RR-4752, INRIA,
[12] S. Sorour and S. Valaee, A network coded ARQ protocol for broadcast streaming over hybrid satellite systems, in IEEE 20th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pp , sept
[13] N. Kuhn, E. Lochin, J. Lacan, R. Boreli, C. Bes, and L. Clarac, Enabling realistic cross-layer analysis based on satellite physical layer traces, in 23rd IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2012), pp ,
[14] N. Kuhn, E. Lochin, J. Lacan, R. Boreli, C. Bes, and L. Clarac, CLIFT: a cross-layer information tool to perform cross-layer analysis based on real physical traces, CoRR, vol. abs/ ,
[15] N. Kuhn, E. Lochin, J. Lacan, R. Boreli, and L. Clarac, On the impact of link layer retransmission schemes on TCP over 4G satellite links, International Journal of Satellite Communications and Networking,
[16] ITU global standard for international mobile telecommunications, IMT-Advanced, ITU-R,
[17] S. Lin and D. J. Costello, Error control coding: fundamentals and applications, Chapter 15, Ed. Prentice-Hall,
[18] M. Allman, V. Paxson, and E. Blanton, TCP Congestion Control, RFC 5681, RFC Editor, Fremont, CA, USA, Sept
[19] S. Mascolo, C. Casetti, M. Gerla, M. Y. Sanadidi, and R. Wang, TCP Westwood: Bandwidth estimation for enhanced transport over wireless links, in Proceedings of the 7th annual international conference on Mobile computing and networking, MobiCom 01, (New York, NY, USA), pp , ACM,
[20] S. Low, L. Peterson, and L. Wang, Understanding TCP Vegas: Theory and Practice, TR , Princeton University, Feb
[21] C. Caini and R. Firrincieli, TCP Hybla: a TCP enhancement for heterogeneous networks, International Journal of Satellite Communications and Networking, vol. 22,
[22] S. Ha, I. Rhee, and L. Xu, CUBIC: a new TCP-friendly high-speed TCP variant, SIGOPS Oper. Syst. Rev., vol. 42, pp , July
[23] K. Tan, J. Song, Q. Zhang, and M. Sridharan, Compound TCP: A Scalable and TCP-Friendly Congestion Control for High-speed Networks, in 4th International Workshop on Protocols for Fast Long-Distance Networks (PFLDNet),
[24] G. Judd and P. Steenkiste, Repeatable and realistic wireless experimentation through physical emulation, SIGCOMM Comput. Commun. Rev., vol. 34, pp , Jan
[25] J. Mittag, S. Papanastasiou, H. Hartenstein, and E. Strom, Enabling Accurate Cross-Layer PHY/MAC/NET Simulation Studies of Vehicular Communication Networks, Proceedings of the IEEE, vol. 99, no. 7, pp ,
[26] W. Chauvet, C. Amiot-Bazile, and J. Lacan, Prediction of performance of the DVB-SH system relying on Mutual Information, in Advanced Satellite Multimedia Systems Conference (ASMA) and the 11th Signal Processing for Space Communications Workshop (SPSC), pp , sept
[27] Second generation DVB interactive satellite system; part 1: Overview and system level specification, in Digital Video Broadcasting (DVB) TS ,
[28] Second generation DVB interactive satellite system; part 2: Lower layers for satellite standard, in Digital Video Broadcasting (DVB) ETSI EN V1.1.1,
[29] Second generation DVB interactive satellite system (DVB-RCS2); part 3: Higher layers satellite specification, in Digital Video Broadcasting (DVB) TS ,
[30] C. Roseti and E. Kristiansen, TCP behaviour in a DVB-RCS environment, in International Communications Satellite Systems Conferences (ICSSC), jun
[31] M. Luglio, F. Zampognaro, T. Morell, and F. Vieira, Joint DAMA-TCP protocol optimization through multiple cross layer interactions in DVB RCS scenario, in International Workshop on Satellite and Space Communications (IWSSC), pp , sept
[32] F. Belli, M. Luglio, C. Roseti, and F. Zampognaro, Evaluation of TCP performance over emulated DVB-RCS scenario with multiple RCSTs, in International Workshop on Satellite and Space Communications (IWSSC), pp , sept
[33] A. Tambuwal, R. Secchi, and G. Fairhurst, Exploration of random access in DVB-RCS, in PostGraduate Symposium on the Convergence of Telecommunications, Networking and Broadcasting (PGNET),
[34] N. Celandroni, F. Davoli, E. Ferro, and A. Gotta, Employing contention resolution random access schemes for elastic traffic on satellite channels, in 18th Ka and Broadband Communications Navigation and Earth Observation Conference, pp , sept
[35] H. C. Bui, J. Lacan, and M.-L. Boucheret, An enhanced multiple random access scheme for satellite communications, in Wireless Telecommunications Symposium (WTS), (London, UK),
[36] N. Abramson, The ALOHA system: another alternative for computer communications, in Proceedings of the November 17-19, 1970, fall joint computer conference, AFIPS 70 (Fall), (New York, NY, USA), pp , ACM,
[37] G. Choudhury and S. Rappaport, Diversity ALOHA: a random access scheme for satellite communications, in IEEE Transactions on Communications, vol. 31, pp , mar
[38] E. Casini, R. De Gaudenzi, and O. Herrero, Contention resolution diversity slotted ALOHA (CRDSA): An enhanced random access scheme for satellite access packet networks, in IEEE Transactions on Wireless Communications, vol. 6, pp , april
[39] D. Ciullo, M. Mellia, and M. Meo, Two schemes to reduce latency in short lived TCP flows, in IEEE Communications Letters, vol. 13, pp , october
[40] C. Labovitz, S. Iekel-Johnson, D. McPherson, J. Oberheide, F. Jahanian, and M. Karir, Atlas internet observatory annual report, in 47th NANOG,
[41] M. Welzl and D. Ros, A survey of Lower-than-Best-Effort Transport Protocols, RFC 6297, RFC Editor, June
[42] A. Sathiaseelan and J. Crowcroft, The free Internet: a distant mirage or near reality?, Technical Report 814, University of Cambridge.
[43] N. Kuhn, O. Mehani, A. Sathiaseelan, and E. Lochin, Less-than-best-effort capacity sharing over high BDP networks with LEDBAT, in IEEE 78th Vehicular Technology Conference (VTC2013-Fall),
[44] D. Rossi, C. Testa, S. Valenti, and L. Muscariello, LEDBAT: The New BitTorrent Congestion Control Protocol, in Proceedings of 19th International Conference on Computer Communications and Networks (ICCCN), aug
[45] G. Carofiglio, L. Muscariello, D. Rossi, C. Testa, and S. Valenti, Rethinking low extra delay background transport protocols, CoRR, vol. abs/ ,
[46] S. Shalunov, G. Hazel, J. Iyengar, and M. Kuehlewind, Low extra delay background transport (LEDBAT), RFC 6817, RFC Editor, Dec
[47] CACM Staff, Bufferbloat: what's wrong with the internet?, Commun. ACM, vol. 55, pp , Feb
[48] K. Nichols and V. Jacobson, Controlling queue delay, ACM Queue,
[49] G. White, Active Queue Management Algorithms for DOCSIS 3.0: A Simulation Study of CoDel, SFQ-CoDel and PIE in DOCSIS 3.0 Networks, Cable Television Laboratories,
[50] N. Kuhn, O. Mehani, H.-C. Bui, J. Lacan, J. Radzik, and E. Lochin, Physical Channel Access (PCA): Time and Frequency Access Methods Simulation in NS-2, in 5th International Conference on Personal Satellite Services (PSATS),
[51] N. Kuhn, H. C. Bui, J. Lacan, J. Radzik, and E. Lochin, On the benefits of random access methods on TCP performance over DVB-RCS2, in ACM MobiCom Workshop on Lowest Cost Denominator Networking for Universal Access (LCDNet),
[52] N. Kuhn, O. Mehani, H.-C. Bui, E. Lochin, J. Lacan, J. Radzik, and R. Boreli, Choosing the right access method for TCP over DVB-RCS2: a simulation study, Under Review,
[53] R. G. Gallager, Low-density parity-check codes,
[54] C. Berrou and A. Glavieux, Turbo Codes. John Wiley & Sons, Inc.,
[55] J. Kurose and K. Ross, Computer Networking: A Top-Down Approach, p. 284, Ed. Addison Wesley,
[56] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, TCP Selective Acknowledgement Options, RFC 2018, RFC Editor, Oct
[57] N. Dukkipati, M. Mathis, Y. Cheng, and M. Ghobadi, Proportional Rate Reduction for TCP, in Proceedings of the 11th ACM SIGCOMM Conference on Internet Measurement,
[58] H. Jung, S.-g. Kim, H. Yeom, S. Kang, and L. Libman, Adaptive delay-based congestion control for high bandwidth-delay product networks, in IEEE Proceedings INFOCOM, pp , april
[59] N. Dukkipati, T. Refice, Y. Cheng, J. Chu, T. Herbert, A. Agarwal, A. Jain, and N. Sutin, An argument for increasing TCP's initial congestion window, SIGCOMM Comput. Commun. Rev., vol. 40, pp , June
[60] R. E. Sheriff and Y. Hu, Mobile Satellite Communication Networks, Chapter 5: Radio Link Design. John Wiley & Sons,
176 [62] R.-B. Saba Siraj, Ajay Kumar Gupta, Network simulation tools survey, tech. rep., [63] VINT Project, The ns Manual (formerly ns Notes and Documentation), Jan [64] E. Weingartner, H. vom Lehn, and K. Wehrle, A performance comparison of recent network simulators, in IEEE International Conference on Communications (ICC), pp. 1 5, june [65] E. Lochin, T. Pérennou, and L. Dairaine, When should I use network emulation?, Annals of Telecommunications, vol. 67, pp , [66] A. Gurtov and S. Floyd, Modeling wireless links for transport protocols, SIGCOMM Comput. Commun. Rev., vol. 34, pp , Apr [67] WiMAX Forum Mobile Radio Specifications Release 2 DRAFT-T R020v01-H Working Draft, [68] Introducing LTE-Advanced pg. 6, July [69] T. Nakamura, S. Abeta, M. Iwamura, T. Abe, and M. Tanno, Overview of LTE- Advanced and Standardization Trends, NTT DOCOMO Technical Journal, vol. Vol. 12, [70] K. Kotuliaková, D. Šimlaštíková, and J. Polec, Analysis of ARQ schemes, Telecommunication Systems, pp. 1 6, [71] L. Badia, M. Levorato, and M. Zorzi, Markov analysis of selective repeat type II hybrid ARQ using block codes, IEEE Transactions on Communications, vol. 56, pp , sept [72] R. Wang, T. Taleb, A. Jamalipour, and B. Sun, Protocols for reliable data transport in space internet, IEEE Communications Surveys Tutorials, vol. 11, pp , quarter [73] J. Mittag, S. Papanastasiou, H. Hartenstein, and E. Strom, Enabling Accurate Cross- Layer PHY/MAC/NET Simulation Studies of Vehicular Communication Networks, Proceedings of the IEEE, vol. 99, pp , july
[74] S. Alfredsson, A. Brunstrom, and M. Sternad, Transport protocol performance over 4G links: Emulation methodology and results, in Proceedings of ISWCS 06,
[75] M. Welzl, A. Abfalterer, and S. Gjessing, XCP vs. CUBIC with Quick-Start: Observations on Implicit vs. Explicit Feedback for Congestion Control, in IEEE International Conference on Communications (ICC), pp. 1-6, june
[76] D. Kliazovich and F. Granelli, A cross-layer scheme for TCP performance improvement in wireless LANs, in IEEE Global Telecommunications Conference (GLOBECOM), vol. 2, pp , nov.-dec.
[77] E. Faulkner, A. Worthen, J. Schodorf, and J. Choi, Interactions between TCP and link layer protocols on mobile satellite links, in IEEE Military Communications Conference (MILCOM), vol. 1, pp , oct.-nov.
[78] Q. Wang and D. Yuan, Improving TCP Performance Using Cross-Layer Feedback in Wireless LANs, in 6th International Conference on Wireless Communications Networking and Mobile Computing (WiCOM), pp. 1-4, sept
[79] H. Balakrishnan, V. N. Padmanabhan, S. Seshan, and R. H. Katz, A comparison of mechanisms for improving TCP performance over wireless links, IEEE/ACM Trans. Netw., vol. 5, pp , Dec
[80] H. Cruickshank, R. Mort, G. Giambene, and M. Berioli, BSM integrated PEP with cross-layer improvements, in International Workshop on Satellite and Space Communications (IWSSC), pp , sept
[81] E. Rendon-Morales, J. Mata-Diaz, J. Alins, J. Munoz, and O. Esparza, Cross-layer architecture for TCP splitting in the return channel over satellite networks, in 6th International Symposium on Wireless Communication Systems (ISWCS), pp , sept
[82] O. Herrero, R. Gaudenzi, and J. Vidal, Design guidelines for advanced random access protocols, in International Conference on Satellite and Space Communications (ICSSC),
[83] T. Gayraud, L. Bertaux, and P. Berthou, A NS-2 Simulation Model of DVB-S2/RCS Satellite Network, in 15th Ka-band Conference, (Italy), p. 1, Sept
[84] R. Secchi, DVB-RCS(2) for ns-2, technical report, University of Aberdeen,
[85] N. Celandroni and R. Secchi, Suitability of DAMA and contention-based satellite access schemes for TCP traffic in mobile DVB-RCS, IEEE Transactions on Vehicular Technology, vol. 58, pp , may
[86] QBSS, Internet2 QBone initiative, tech. rep.
[87] M. Arumaithurai, X. Fu, and K. K. Ramakrishnan, NF-TCP: a network friendly TCP variant for background delay-insensitive applications, in Proceedings of the 10th international IFIP TC 6 conference on Networking - Volume Part II,
[88] G. Carofiglio, L. Muscariello, D. Rossi, and S. Valenti, The Quest for LEDBAT Fairness, in GLOBECOM 10, pp. 1-6,
[89] A. Abu and S. Gordon, Impact of Delay Variability on LEDBAT Performance, in IEEE International Conference on Advanced Information Networking and Applications (AINA), march
[90] J. Yee and E. J. Weldon, Evaluation of the performance of error-correcting codes on a Gilbert channel, IEEE Transactions on Communications, vol. 43, pp , aug
[91] N. Kuhn, N. Van Wambeke, M. Gineste, B. Gadat, E. Lochin, and J. Lacan, On the impact of link layer retransmissions on TCP for aeronautical communications, in 5th International Conference on Personal Satellite Services (PSATS),
[92] J. Cao, W. Cleveland, Y. Gao, K. Jeffay, F. Smith, and M. Weigle, Stochastic models for generating synthetic HTTP source traffic, in Twenty-third Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), vol. 3, pp , march
[93] C. Kissling and A. Munari, On the Integration of Random Access and DAMA Channels for the Return Link of Satellite Networks, in IEEE International Conference on Communications (ICC), june
ISO/CEI 11172-3 NORME INTERNATIONALE
NORME INTERNATIONALE ISO/CEI 11172-3 Première édition 1993-08-01 Technologies de l information - Codage de l image animée et du son associé pour les supports de stockage numérique jusqu à environ Ii5 Mbit/s
Le socle de sécurité nouvelle génération Consolider, virtualiser et simplifier les architectures sécurisées
Le socle de sécurité nouvelle génération Consolider, virtualiser et simplifier les architectures sécurisées sans compromis. Florent Fortuné [email protected] 21 Mai 2008 Evolution des architectures
QoS et Multimédia SIR / RTS. Introduction / Architecture des applications multimédia communicantes
QoS et Multimédia SIR / RTS Introduction / Architecture des applications multimédia communicantes Isabelle Guérin Lassous [email protected] http://perso.ens-lyon.fr/isabelle.guerin-lassous
Stratégie DataCenters Société Générale Enjeux, objectifs et rôle d un partenaire comme Data4
Stratégie DataCenters Société Générale Enjeux, objectifs et rôle d un partenaire comme Data4 Stéphane MARCHINI Responsable Global des services DataCenters Espace Grande Arche Paris La Défense SG figures
Agrégation de liens xdsl sur un réseau radio
Agrégation de liens xdsl sur un réseau radio Soutenance TX Suiveur: Stéphane Crozat Commanditaire: tetaneutral.net/laurent Guerby 1 02/02/212 Introduction 2 Introduction: schéma 3 Définition d un tunnel
Réseaux grande distance
Chapitre 5 Réseaux grande distance 5.1 Définition Les réseaux à grande distance (WAN) reposent sur une infrastructure très étendue, nécessitant des investissements très lourds. Contrairement aux réseaux
Réseaux Locaux. Objectif du module. Plan du Cours #3. Réseaux Informatiques. Acquérir un... Réseaux Informatiques. Savoir.
Mise à jour: Mars 2012 Objectif du module Réseaux Informatiques [Archi/Lycée] http://fr.wikipedia.org/ Nicolas Bredèche Maître de Conférences Université Paris-Sud [email protected] Acquérir un... Ressources
Configuration d'un trunk SIP OpenIP sur un IPBX ShoreTel
Configuration d'un trunk SIP OpenIP sur un IPBX ShoreTel Note d application Produit : ShoreTel SIP Trunks OpenIP Version système: 14.2 Version système : 14.2 ShoreTel & SIP trunk OpenIP 1 ShoreTel & SIP
Ch2 La modélisation théorique du réseau : OSI Dernière maj : jeudi 12 juillet 2007
Ch2 La modélisation théorique du réseau : OSI Dernière maj : jeudi 12 juillet 2007 I. LA NORMALISATION... 1 A. NORMES... 1 B. PROTOCOLES... 2 C. TECHNOLOGIES RESEAU... 2 II. LES ORGANISMES DE NORMALISATION...
Introduction aux Technologies de l Internet
Introduction aux Technologies de l Internet Antoine Vernois Université Blaise Pascal Cours 2006/2007 Introduction aux Technologies de l Internet 1 Au programme... Généralités & Histoire Derrière Internet
1.Introduction - Modèle en couches - OSI TCP/IP
1.Introduction - Modèle en couches - OSI TCP/IP 1.1 Introduction 1.2 Modèle en couches 1.3 Le modèle OSI 1.4 L architecture TCP/IP 1.1 Introduction Réseau Télécom - Téléinformatique? Réseau : Ensemble
Vers une approche Adaptative pour la Découverte et la Composition Dynamique des Services
69 Vers une approche Adaptative pour la Découverte et la Composition Dynamique des Services M. Bakhouya, J. Gaber et A. Koukam Laboratoire Systèmes et Transports SeT Université de Technologie de Belfort-Montbéliard
IPFIX (Internet Protocol Information export)
IPFIX (Internet Protocol Information export) gt-metro, réunion du 20/11/06 [email protected] 20-11-2006 gt-metro: IPFIX 1 Plan Définition d IPFIX Le groupe de travail IPFIX Les protocoles candidats
physicien diplômé EPFZ originaire de France présentée acceptée sur proposition Thèse no. 7178
Thèse no. 7178 PROBLEMES D'OPTIMISATION DANS LES SYSTEMES DE CHAUFFAGE A DISTANCE présentée à l'ecole POLYTECHNIQUE FEDERALE DE ZURICH pour l'obtention du titre de Docteur es sciences naturelles par Alain
Analyse de la bande passante
Analyse de la bande passante 1 Objectif... 1 2 Rappels techniques... 2 2.1 Définition de la bande passante... 2 2.2 Flux ascendants et descandants... 2 2.3 Architecture... 2 2.4 Bande passante et volumétrie...
Project 1 Experimenting with Simple Network Management Tools. ping, traceout, and Wireshark (formerly Ethereal)
Project 1 Experimenting with Simple Network Management Tools ping, traceout, and Wireshark (formerly Ethereal) (A) (25%) Use the ping utility to determine reach-ability of several computers. To run a ping
Formation Cisco CCVP. Quality of Service. v.2.1
Formation Cisco CCVP Quality of Service v.2.1 Formation Cisco Certified Voice Professional La formation Cisco CCVP proposée par EGILIA Learning présente toutes les connaissances fondamentales et pratiques,
THE EVOLUTION OF CONTENT CONSUMPTION ON MOBILE AND TABLETS
THE EVOLUTION OF CONTENT CONSUMPTION ON MOBILE AND TABLETS OPPA investigated in March 2013 its members, in order to design a clear picture of the traffic on all devices, browsers and apps. One year later
Les Réseaux Privés Virtuels (VPN) Définition d'un VPN
Les Réseaux Privés Virtuels (VPN) 1 Définition d'un VPN Un VPN est un réseau privé qui utilise un réseau publique comme backbone Seuls les utilisateurs ou les groupes qui sont enregistrés dans ce vpn peuvent
Réseau longue distance et application distribuée dans les grilles de calcul : étude et propositions pour une interaction efficace
1 Réseau longue distance et application distribuée dans les grilles de calcul : étude et propositions pour une interaction efficace Réseau longue distance et application distribuée dans les grilles de
LTE dans les transports: Au service de nouveaux services
LTE dans les transports: Au service de nouveaux services 1 LTE dans les transports: Au service de nouveaux services Dr. Cédric LÉVY-BENCHETON Expert Télécom, Egis Rail [email protected] Résumé
ERA-Net Call Smart Cities. CREM, Martigny, 4 décembre 2014 Andreas Eckmanns, Responsable de la recherche, Office Fédéral de l énergie OFEN
ERA-Net Call Smart Cities CREM, Martigny, 4 décembre 2014 Andreas Eckmanns, Responsable de la recherche, Office Fédéral de l énergie OFEN Une Smart City, c est quoi? «Une Smart City offre à ses habitants
2. DIFFÉRENTS TYPES DE RÉSEAUX
TABLE DES MATIÈRES 1. INTRODUCTION 1 2. GÉNÉRALITÉS 5 1. RÔLES DES RÉSEAUX 5 1.1. Objectifs techniques 5 1.2. Objectifs utilisateurs 6 2. DIFFÉRENTS TYPES DE RÉSEAUX 7 2.1. Les réseaux locaux 7 2.2. Les
L ESPACE À TRAVERS LE REGARD DES FEMMES. European Economic and Social Committee Comité économique et social européen
L ESPACE À TRAVERS LE REGARD DES FEMMES 13 European Economic and Social Committee Comité économique et social européen 13 This publication is part of a series of catalogues published in the context of
HAUTE DISPONIBILITÉ DE MACHINE VIRTUELLE AVEC HYPER-V 2012 R2 PARTIE CONFIGURATION OPENVPN SUR PFSENSE
HAUTE DISPONIBILITÉ DE MACHINE VIRTUELLE AVEC HYPER-V 2012 R2 PARTIE CONFIGURATION OPENVPN SUR PFSENSE Projet de semestre ITI soir 4ème année Résumé configuration OpenVpn sur pfsense 2.1 Etudiant :Tarek
Groupe Eyrolles, 2000, 2004, ISBN : 2-212-11330-7
Groupe Eyrolles, 2000, 2004, ISBN : 2-212-11330-7 Sommaire Cours 1 Introduction aux réseaux 1 Les transferts de paquets... 2 Les réseaux numériques... 4 Le transport des données... 5 Routage et contrôle
Téléinformatique. Chapitre V : La couche liaison de données dans Internet. ESEN Université De La Manouba
Téléinformatique Chapitre V : La couche liaison de données dans Internet ESEN Université De La Manouba Les techniques DSL La bande passante du service voix est limitée à 4 khz, cependant la bande passante
Editing and managing Systems engineering processes at Snecma
Editing and managing Systems engineering processes at Snecma Atego workshop 2014-04-03 Ce document et les informations qu il contient sont la propriété de Ils ne doivent pas être copiés ni communiqués
Administration des ressources informatiques
1 2 La mise en réseau consiste à relier plusieurs ordinateurs en vue de partager des ressources logicielles, des ressources matérielles ou des données. Selon le nombre de systèmes interconnectés et les
RTP et RTCP. EFORT http://www.efort.com
RTP et RTCP EFORT http://www.efort.com Pour transporter la voix ou la vidéo sur IP, le protocole IP (Internet Protocol) au niveau 3 et le protocole UDP (User Datagram Protocol) au niveau 4 sont utilisés.
AGROBASE : un système de gestion de données expérimentales
AGROBASE : un système de gestion de données expérimentales Daniel Wallach, Jean-Pierre RELLIER To cite this version: Daniel Wallach, Jean-Pierre RELLIER. AGROBASE : un système de gestion de données expérimentales.
Master e-secure. VoIP. RTP et RTCP
Master e-secure VoIP RTP et RTCP Bureau S3-354 Mailto:[email protected] http://saquet.users.greyc.fr/m2 Temps réel sur IP Problèmes : Mode paquet, multiplexage de plusieurs flux sur une même ligne,
Contents Windows 8.1... 2
Workaround: Installation of IRIS Devices on Windows 8 Contents Windows 8.1... 2 English Français Windows 8... 13 English Français Windows 8.1 1. English Before installing an I.R.I.S. Device, we need to
Chapitre 1: Introduction générale
Chapitre 1: Introduction générale Roch Glitho, PhD Associate Professor and Canada Research Chair My URL - http://users.encs.concordia.ca/~glitho/ Table des matières Définitions et examples Architecture
Les Réseaux Informatiques
Les Réseaux Informatiques Licence Informatique, filière SMI Université Mohammed-V Agdal Faculté des Sciences Rabat, Département Informatique Avenue Ibn Batouta, B.P. 1014 Rabat Professeur Enseignement
LA COUCHE PHYSIQUE EST LA COUCHE par laquelle l information est effectivemnt transmise.
M Informatique Réseaux Cours bis Couche Physique Notes de Cours LA COUCHE PHYSIQUE EST LA COUCHE par laquelle l information est effectivemnt transmise. Les technologies utilisées sont celles du traitement
STREAMCORE. Gestion de Performance et Optimisation Réseau
sc STREAMCORE Gestion de Performance et Optimisation Réseau Gestion de Performance et Optimisation Réseau avec Streamcore Visualisation des performances applicatives sur le réseau Surveillance de la qualité
Chapitre 11 : Le Multicast sur IP
1 Chapitre 11 : Le Multicast sur IP 2 Le multicast, Pourquoi? Multicast vs Unicast 3 Réseau 1 Serveur vidéo Réseau 2 Multicast vs Broadcast 4 Réseau 1 Serveur vidéo Réseau 2 Multicast 5 Réseau 1 Serveur
PROJECT POUR LE SYSTÈME DE SURVEILLANCE PAR CAMERA BASÉ SUR TECHNOLOGIE AXIS, PANNEAUX SOLAIRES ET LUMIERE DU LEDS BLOC D APARTEMENT LAURIER.
PROJECT POUR LE SYSTÈME DE SURVEILLANCE PAR CAMERA BASÉ SUR TECHNOLOGIE AXIS, PANNEAUX SOLAIRES ET LUMIERE DU LEDS BLOC D APARTEMENT LAURIER. OPCION 1: Cameras et Hardware block 1 et 2 avec Cameras à l
Short Message Service Principes et Architecture
Short Message Service Principes et Architecture EFORT http://www.efort.com Défini dans le cadre des spécifications GSM phase 2, le service de messages courts (S, Short Message Service) encore appelé "texto",
Prérequis réseau constructeurs
Prérequis réseau constructeurs - Guide de configuration du réseau Page 2 - Ports utilisés - Configuration requise - OS et navigateurs supportés Page 4 Page 7 Page 8 Guide de configuration du réseau NB:
SEMINAIRES & ATELIERS EN TÉLÉCOMMUNICATIONS RESEAUX
SEMINAIRES & ATELIERS EN TÉLÉCOMMUNICATIONS & RESEAUX SEMINAIRE ATELIER SUR LA TELEPHONIE ET LA VOIX SUR IP (T-VoIP): DE LA THEORIE A LA PRATIQUE DEPLOIEMENT D UNE PLATEFORME DE VoIP AVEC ASTERIK SOUS
Définition et diffusion de signatures sémantiques dans les systèmes pair-à-pair
Définition et diffusion de signatures sémantiques dans les systèmes pair-à-pair Raja Chiky, Bruno Defude, Georges Hébrail GET-ENST Paris Laboratoire LTCI - UMR 5141 CNRS Département Informatique et Réseaux
LA VIDÉOSURVEILLANCE SANS FIL
LA VIDÉOSURVEILLANCE SANS FIL Par Garry Goldenberg ALVARION [email protected] INTRODUCTION Dans un monde de plus en plus sensible aux problèmes de sécurité, les systèmes de vidéosurveillance
II/ Le modèle OSI II.1/ Présentation du modèle OSI(Open Systems Interconnection)
II/ Le modèle OSI II.1/ Présentation du modèle OSI(Open Systems Interconnection) II.2/ Description des couches 1&2 La couche physique s'occupe de la transmission des bits de façon brute sur un canal de
REVITALIZING THE RAILWAYS IN AFRICA
REVITALIZING THE RAILWAYS IN AFRICA Contents 1 2 3 4 GENERAL FRAMEWORK THE AFRICAN CONTINENT: SOME LANDMARKS AFRICAN NETWORKS: STATE OF PLAY STRATEGY: DESTINATION 2040 Contents 1 2 3 4 GENERAL FRAMEWORK
TABLE DES MATIERES A OBJET PROCEDURE DE CONNEXION
1 12 rue Denis Papin 37300 JOUE LES TOURS Tel: 02.47.68.34.00 Fax: 02.47.68.35.48 www.herve consultants.net contacts@herve consultants.net TABLE DES MATIERES A Objet...1 B Les équipements et pré-requis...2
Présentation et portée du cours : CCNA Exploration v4.0
Présentation et portée du cours : CCNA Exploration v4.0 Dernière mise à jour le 3 décembre 2007 Profil des participants Le cours CCNA Exploration s adresse aux participants du programme Cisco Networking
Institut français des sciences et technologies des transports, de l aménagement
Institut français des sciences et technologies des transports, de l aménagement et des réseaux Session 3 Big Data and IT in Transport: Applications, Implications, Limitations Jacques Ehrlich/IFSTTAR h/ifsttar
EFFETS D UN CHIFFRAGE DES DONNEES SUR
EFFETS D UN CHIFFRAGE DES DONNEES SUR LA QUALITE DE SERVICES SUR LES RESEAUX VSAT (RESEAUX GOUVERNEMENTAUX) Bruno VO VAN, Mise à jour : Juin 2006 Page 1 de 6 SOMMAIRE 1 PRÉAMBULE...3 2 CRITÈRES TECHNOLOGIQUES
Outils d'analyse de la sécurité des réseaux. HADJALI Anis VESA Vlad
Outils d'analyse de la sécurité des réseaux HADJALI Anis VESA Vlad Plan Introduction Scanneurs de port Les systèmes de détection d'intrusion (SDI) Les renifleurs (sniffer) Exemples d'utilisation Conclusions
Réseaux Mobiles et Haut Débit
Réseaux Mobiles et Haut Débit Worldwide Interoperability for Microwave Access 2007-2008 Ousmane DIOUF Tarik BOUDJEMAA Sadek YAHIAOUI Plan Introduction Principe et fonctionnement Réseau Caractéristiques
Pour vos questions ou une autorisation d utilisation relative à cette étude vous pouvez contacter l équipe via [email protected]
ES ENSEIGNEMENTS DU NOUVEAU LES 4GMARK EN DU NOUVEAU FULLTEST EN 3G FULLTEST 3G France Métropolitaine Juillet/Aout 2014 Contenu 1. PRINCIPE DE L ETUDE... 2 1.1 Contexte... 2 1.2 Objectif... 3 2. PERIMETRE
Présentation Générale
Présentation Générale Modem routeur LAN Inte rnet Système de connectivités Plan Modem synchrone et Asynchrone La famille xdsl Wifi et WiMax Le protocole Point à Point : PPP Le faisceau hertzien Et le Satellite.
