Wednesday, June 6, 2012


RPL Protocol

RPL is a routing protocol for low-power and lossy networks, and its use is not limited to 6LoWPAN. The protocol supports point-to-point and multipoint-to-point traffic and can be used in large networks.

It constructs a Destination Oriented Directed Acyclic Graph (DODAG); several DODAGs can coexist in the same network, providing different routing criteria to accommodate different types of traffic.

RPL adapts easily to network changes, and it has a loop-avoidance mechanism to fight the loops that can appear when a node becomes unreachable or a link is congested.

RPL takes both link and node properties into account when choosing paths. Security is an optional extension rather than a built-in feature; the protocol assumes other mechanisms, such as link-layer security, can be used instead.
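
To make the path choice concrete, here is a minimal sketch of preferred-parent selection, assuming a simple additive objective function (candidate rank = parent rank + link cost) and the standard notion of node rank as distance from the DODAG root; the neighbour table and cost values are invented for illustration, not taken from the lecture.

    # Minimal sketch of RPL preferred-parent selection with an additive
    # objective function; the neighbour data is hypothetical.
    def select_parent(neighbours, my_rank):
        # Loop avoidance: ignore candidates that would not decrease our rank,
        # so traffic always flows "up" toward the DODAG root.
        candidates = [n for n in neighbours if n["rank"] < my_rank]
        best = min(candidates, key=lambda n: n["rank"] + n["link_cost"])
        return best["id"], best["rank"] + best["link_cost"]

    neighbours = [
        {"id": "A", "rank": 256, "link_cost": 256},  # closer to the root
        {"id": "B", "rank": 512, "link_cost": 128},
    ]
    print(select_parent(neighbours, my_rank=1024))   # -> ('A', 512)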


ISA SP100.11a

In this lecture Juanjo, Bethzalie and I tried to explain what ISA SP100.11a is and its characteristics.

This is a standard created by ISA, an international nonprofit member association consisting of automation professionals engaged in the design, development, production and application of devices and systems that sense, measure and control industrial processes and manufacturing operations.

This standard assures multi-vendor device interoperability, provides simple, flexible and scalable security addressing major industrial threats, and defines wireless connectivity for applications in classes 1-5, and possibly class 0.

The principal characteristics of this standard are:

- a robust, scalable and flexible architecture;
- low bitrate and very low power consumption, with network management that is transparent to the user;
- adaptive routing, multiple network topologies, addressing systems and redundancy levels;
- mechanisms to sustain a large number of devices in the network;
- good spectrum management, coexistence with other networks and reliable communications;
- security techniques such as authentication, encryption and time-stamping.

SMART CITIES

In this lecture Pegah and Marcos explain what a Smart City is: a city that uses data and information technologies to provide better services to citizens, track progress toward policy goals, optimize the existing infrastructure, enable new business models for public- and private-sector service provision, and enable collaboration between government and citizens.

There are three core components of a smart city: the technological, people and institutional factors. The advantages of a smart city include smarter healthcare, public safety, transportation (smart mobility), energy, education and government services.

About the smart city technologies, there are four different types implemented:

  • Data-collecting technologies to form a sensor and actuator network
  • Data-transmission technologies to provide a high-speed Internet infrastructure allowing city-wide access to the information
  • Data storage and processing technologies
  • A service delivery platform



GPON

GPON is a standard for PONs. A PON is a passive optical network built from the following elements: a fiber-optic access network, a point-to-multipoint FTTx architecture, unpowered optical splitters (sharing a single optical fiber), an Optical Line Terminal (OLT) and the Optical Network Units (ONUs). Bandwidth is allocated between ONUs using Dynamic Bandwidth Allocation (DBA), a more efficient method that grants bandwidth only when needed and uses statistical multiplexing.
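
As a rough illustration of DBA with statistical multiplexing, the toy sketch below grants each ONU its reported demand when the upstream capacity suffices and shares the capacity proportionally otherwise; the capacity figure and the demands are illustrative, not taken from the GPON standard.

    # Toy status-reporting DBA: ONUs report queued demand (Mbit/s) and the
    # OLT grants upstream bandwidth, sharing proportionally under overload.
    def allocate(capacity, demands):
        total = sum(demands.values())
        if total <= capacity:
            return dict(demands)        # everyone gets what they asked for
        scale = capacity / total        # statistical multiplexing under load
        return {onu: d * scale for onu, d in demands.items()}

    print(allocate(1244, {"onu1": 900, "onu2": 300, "onu3": 400}))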

Fiber to the x (FTTx) is a generic term for any broadband network architecture using optical fiber. It was initially a generalization for several configurations of fiber deployment (FTTN, FTTC, FTTB, FTTH...)
An optical network unit (ONU) is a device that transforms incoming optical signals into electronic signals at a customer's premises, in order to provide telecommunications services over an optical fiber network.

An optical line termination (OLT), also called an optical line terminal, is a device which serves as the service provider endpoint of a passive optical network.

The problem of optimal network planning is very important. Building a PON infrastructure is difficult because it implies installing many ODNs, and there are architectural constraints that have to be respected while minimizing cost.

In conclusion, status-reporting (SR) DBA algorithms are more promising than non-status-reporting (NSR) ones. SLAs should also include packet-delay requirements, and the GPON architecture does not immediately support LLU.

EPS: LTE + SAE

In this lecture Henar talks about LTE (Long Term Evolution), the natural evolution of UMTS. The most important reason to pursue LTE has been the need for more simultaneous user capacity. Moreover, it allows higher speeds, around 200 Mbps on the downlink, and with LTE everything is based on IP.

About SAE (System Architecture Evolution), she explains that it provides lower latency, lower costs (CAPEX and OPEX), lower complexity and higher compatibility with other technologies. This architecture is considered functional, scalable, relatively cheap, flexible with other technologies and secure.

Together, LTE/SAE provide spectrum flexibility, reduced TCO and high performance for mobile broadband networks, with a smooth migration to a flat, optimized two-node architecture. They also deliver cost efficiency, a targeted network migration and a scalable, robust architecture. In addition, everything is IP-based and compatible with 2G/3G and other technologies.


HSPA and HSPA+

HSPA was the key technology that made 3G systems popular, because it really marked the difference in bitrate with respect to 2G systems.

HSPA implements MIMO and 16/64-QAM modulation in its latest releases.

With HSDPA/HSUPA the first step towards a flattened network was taken; HSPA+ intends to go even further. Its improvements are lower latency, higher data rates, increased capacity, better support for VoIP and improved support for multicast services.

The Multimedia Broadcast Multicast Service (MBMS) is designed to provide efficient delivery of broadcast and multicast services, such as IPTV or videoconferencing. This service was already included in previous releases, but its real implementation was not feasible until HSPA+.

MIMO is a technology which uses multiple antennas at the transmitter and receiver. It offers increases in data throughput and link range without additional bandwidth or increased transmit power.

Femto-cell technology

In this exposition Bruna and Martin talk about femtocell technology. A femtocell is a small cellular base station (a low-power access point) that provides indoor coverage and connects to the service provider via a broadband network. It supports 4-8 simultaneous voice conversations plus data services. Its best characteristic is that it is applicable to all standards (3GPP & 3GPP2).

The deployment of femtocells will increase because femtocells offer better coverage and capacity in both home and office environments. Operators face challenges in providing a low-cost solution while mitigating RF interference, providing QoS over the IP backhaul and maintaining scalability.

 3GPP has undertaken a large effort to define industry standards for all of the essential aspects of UMTS-based femtocells. Femtocells are going to change the landscape of mobile technology and networking business in the coming years. 

3GPP2 standardization enables a better implementation of femtocells. With the combination of WiMAX and femtocells the usage of mobile devices will be improved, but there are still some challenges to handle.

SIP protocol

In this lecture, Federico and Monika talk about the SIP architecture, its components, its addressing and localization, security, real implementations and competitors.

SIP is a signaling protocol used to establish sessions on the Internet. It allows the exchange of session-description information between the caller and the callee, and it is an elastic protocol that allows new features through extensions. The basic functions of a signaling protocol are:

- manage the users' registration and their localization
- provide a mechanism to establish and tear down end-to-end connections through request messages
- route response messages from the destination back to the caller

SIP uses a simple client-server architecture with two kinds of messages: requests (also known as methods) and responses. Regarding security, SIP provides protection at each layer of the ISO/OSI stack and supports IPsec and TLS. Real SIP implementations include VoIP phones. VoIP is getting more and more popular, and at the same time SIP is getting more and more complicated. It is possible that it will eventually be replaced by different protocols, but for now its position on the market is stable.
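
Since SIP is a plain-text protocol, a request is easy to show. The sketch below assembles a minimal INVITE in the style of the RFC 3261 examples; the addresses, branch and tag values are illustrative, and the SDP body is omitted.

    # Builds a minimal SIP INVITE request (header values are illustrative).
    def invite(caller, callee, call_id):
        lines = [
            f"INVITE sip:{callee} SIP/2.0",
            "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776asdhds",
            f"From: <sip:{caller}>;tag=1928301774",
            f"To: <sip:{callee}>",
            f"Call-ID: {call_id}",
            "CSeq: 1 INVITE",
            "Content-Length: 0",          # no SDP body in this sketch
        ]
        return "\r\n".join(lines) + "\r\n\r\n"

    print(invite("alice@atlanta.com", "bob@biloxi.com", "a84b4c76e66710"))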

Deployment of IPv6

Pawel and Francisco talk about how the IPv4 address space is being exhausted and how IPv6 works: basic specifications and transition mechanisms.

The depletion of the pool of unallocated IPv4 addresses forces the evolution to a new IP version. IP version 6 differs from its predecessor in its headers and brings new characteristics: mandatory IPsec, simplified processing in routers, mobility, jumbograms and end-to-end connectivity (reducing cost).
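
One of those header differences is that the IPv6 base header is always 40 bytes, with no checksum or fragmentation fields, which is what simplifies processing in routers. A small sketch (using the documentation prefix 2001:db8::/32; the payload values are arbitrary):

    import socket
    import struct

    # Packs the fixed 40-byte IPv6 base header: version/TC/flow label,
    # payload length, next header (17 = UDP), hop limit, then src and dst.
    def ipv6_header(src, dst, payload_len, next_header=17, hop_limit=64):
        ver_tc_flow = 6 << 28              # version 6, TC and flow label 0
        return (struct.pack("!IHBB", ver_tc_flow, payload_len,
                            next_header, hop_limit)
                + socket.inet_pton(socket.AF_INET6, src)
                + socket.inet_pton(socket.AF_INET6, dst))

    hdr = ipv6_header("2001:db8::1", "2001:db8::2", payload_len=0)
    print(len(hdr))   # -> 40, always, unlike IPv4's variable-length header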

Regarding the transition mechanisms, the migration provides address translation to interconnect IPv4 and IPv6 domains (SIIT), as well as tunnels to connect IPv6 islands across IPv4 clouds.

In conclusion, the sooner we implement IPv6, the greater the benefits in ICT. But it has to be taken into account that not every market player has a motivation to implement new solutions.

i2010 and Digital Agenda for Europe

Hugo and Sergi first talk about i2010, the EU framework "A European Information Society for growth and employment". It embraces all aspects of the information, communication and audiovisual sectors and provided the broad policy guidelines for ICT from 2005 to 2010. eInclusion, however, is yet to be achieved.

Then they talk about the Single Market and its advantages, the use of R&D, online safety and the impact of young Internet users.

Regarding the Digital Agenda, they talk about its objective of preparing the EU economy for the challenges of the decade and defining the key role of ICTs in the future of Europe. The Digital Agenda is one of seven flagship programs and the successor to i2010. Today there is a risk that the EU's ICT sector is left behind; the Digital Agenda aims to improve the ICT sector to the benefit of European citizens and enterprises.

The deadline for the actions is 2020. The Digital Agenda is currently well on the way to reaching its targets, and eGovernment services are improving a lot. It also aims to provide ultra-fast Internet and to remove barriers to cross-border activities.

From Possession to Access: New ways of Internet Use

In this session Marko and Albert talk about streaming technologies, their network architecture and different applications such as video and music streaming.

Streaming is the major trend today because it has no delays: you can access the content almost instantly, and it offers the possibility of viewing in real time. Bandwidth is a critical factor.
Regarding the network architecture and protocols, data needs to arrive quickly and in order. That is why streaming video and audio use protocols that allow real-time data transfer, such as RTSP, MMS, RTMP...

They explain that we are now in the middle of a paradigm shift driven by the market (more available services), although not everybody can use them because of slow connections in some countries.
Nowadays streaming is accessible from a wide variety of platforms, and there are many different business models with different streaming services (such as YouTube or Netflix for video and Spotify or Last.fm for music).

Cloud Computing:

In this lecture Grace and Aleksandra talk about cloud computing, a new way to use the web.

Cloud computing is a group of services that deliver computing and storage capacity to users.

Regarding its use, they said there is a growing number of consumer-friendly cloud applications, such as Evernote, Dropbox and Google Docs. For enterprise use, cloud computing is commonly used for renting infrastructure or platforms, which helps small developers enter the market and helps companies that are not in the IT business to run their own IT department.

In the near future, cloud computing will incorporate developments in cloud-centric operating systems (the Google Chromebook), integration of cloud services into the OS, and new, interesting hybrid concepts.

Benchmarking of the Internet Services:

In this exposition Mark and Luis talk about Internet services. Google and Microsoft provide a huge number of Internet services (most of them free) accessible with the same login name.

Google provides diverse services for personal communication and professional business, with good compatibility with existing user software; it has more services than Microsoft and is continuously developing more. Both for individual users and for small enterprises, Google services are the best.

The number of services available on the network is really huge. Most of them have become essential (e-mail) and some of them will become so (softphones, file hosting...), both for individual users and small enterprises.

The features that individual users and small enterprises look for are different, but some services fit both criteria well. Internet services are the present and the future of services.

Benchmarking of the different Spanish Internet Service providers:

Carlos and Olsi talk about ISPs in Spain and their classification. An ISP (Internet Service Provider) is a company, or a group of companies, that provides access to the Internet. In Spain all ISPs offer an integrated service of Internet, telephony (fixed and mobile) and TV, for both residential and business clients.

Telefonica and ONO are the best-choice ISPs for Internet service alone, offering affordable prices for both ADSL and fiber optics; ONO seems the better choice for broadband service. Vodafone and Orange compete with each other by offering the same mobile and Internet services at similar prices.

Orange seems more attractive for new users, and Jazztel could be a good choice for fixed telephony and Internet services.

New trends in TV broadcasting using Internet

In this exposition Anna and Juan talk about DTTV technology, IPTV technology and the Multimedia Home Platform (MHP).

About IPTV, they said that it redefines the way we watch TV. The bandwidth consumed in distributing the IPTV signal is, on several measures, higher than that used in DTTV, and IPTV enables triple-play services over a converged network. This technology is interactive in nature.

About MHP, they said it is an open middleware system standard designed by the DVB project for interactive digital television. It enables the reception and execution of interactive, Java-based applications on a TV set. Interactive TV applications can be delivered over the broadcast channel together with the audio and video streams; these applications can be, for example, information services, games, interactive voting, e-mail, SMS or shopping.

HbbTV is an intermediate step that integrates IPTV and DTTV. Eventually DTTV and IPTV will be integrated transparently to the user, providing mobility, quality and universal access (NextTV).

New Trends in Web Searching

My partners Guillermo and Damià talk about the new trends in web searching. Specifically, they talk about the Semantic Web, Web 3.0 and Google's algorithm.

About the Semantic Web, they said, in summary, that all the tools are ready to use; what remains is for websites to implement them. It is the natural path of evolution towards a great knowledge database: machines do the work and humans only see the results they want to see. However, the transition will not be easy, due to the amount of data to be reprocessed and the change of mentality required of people.

About Web 3.0, they said it is still uncertain. It will make life easier for humans, but the technologies and languages it will be built on are still unknown.

About Google, they said that it is excellent at what it does. Its success is down to R&D, and it is creating a new way of working and doing business.

Internet Scalability (Part 2)
In this lecture Florin continues with the explanation of the possible solutions.

-Evolutionary Internet Architectures (Location/ID Split): IP addresses play two complementary roles (identifier and locator). A change of locator results in a change of identifier, thus breaking ongoing flows. The solution is to separate the two functions.

– Host-based approach: its principal characteristics are that packets are converted such that the transport layer is exposed only to identifiers, locators are present only at the network routing level, the host manages its locators, and the host obtains the Loc/ID binding.

– Network-based approach: hosts are unchanged; each host has a stable IP address, used as an identifier, and the ID is not globally routable.

LISP -> It requires no changes at the end hosts, only a few pieces of network equipment to be changed, and allows incremental deployment. There are now two address spaces: the end-host identifier space (EID) and the routing locator space (RLOC). Hosts and the majority of routers are unaffected and no changes are needed within the core networks; a mapping system has to be added, together with tunnel routers, so that EID prefixes are no longer advertised into BGP.

One EID can be associated with more than one RLOC, so we can establish priorities and weights. The benefits of LISP are a decrease of the Default Free Zone routing table, proper multihoming support, no changes at the hosts and only a few routers needing changes; but it also has some issues: the locator reachability problem, mapping-system scalability, deployment scenarios, etc. The first can be addressed with LISP-specific mechanisms, the second with LISP-TREE, which is based on DNS ideas, and the third with draft-jakab-lisp-deployment.
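
A rough sketch of how such an EID-to-RLOC lookup could behave, with the lowest priority winning and weights used for load balancing among equals; the mapping entries below are invented for illustration.

    import random

    # Hypothetical EID-to-RLOC mapping table: lowest priority wins,
    # weights load-balance among the locators sharing that priority.
    MAPPINGS = {
        "192.0.2.0/24": [
            {"rloc": "203.0.113.1", "priority": 1, "weight": 50},
            {"rloc": "203.0.113.2", "priority": 1, "weight": 50},
            {"rloc": "198.51.100.1", "priority": 2, "weight": 100},  # backup
        ],
    }

    def pick_rloc(eid_prefix):
        rlocs = MAPPINGS[eid_prefix]
        best = min(r["priority"] for r in rlocs)
        candidates = [r for r in rlocs if r["priority"] == best]
        weights = [r["weight"] for r in candidates]
        return random.choices(candidates, weights=weights)[0]["rloc"]

    print(pick_rloc("192.0.2.0/24"))   # one of the two priority-1 locators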

Internet Scalability

Florin Coras talks in this lecture about the inter-domain routing problem and representative solutions. The inter-domain routing problem is to allow the exchange of data between peers along the best path, one that possibly crosses several transit provider domains and fulfills the routing policies of each domain independently of its network topology. Each peer is an Autonomous System. The Border Gateway Protocol (BGP) is the common inter-AS routing protocol, and its principal function is to exchange network reachability information with other BGP systems.

Each AS announces only what it considers its best path, and each domain is able to define its own routing policy thanks to BGP. In practice there are two policies: customer-provider peering and shared-cost peering. The decision process consists of selecting the preferred routes (manually configured local-pref), then the shortest AS-path route (topology dependent) and, in case of ties, applying tie-breaking rules.
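
That decision process can be condensed into a sort key, as in the sketch below: highest local-pref first, then shortest AS path, then a tie-breaker (here simply the lowest router ID; real BGP has a longer tie-breaking list). The route records are hypothetical.

    # Condensed BGP best-path selection over hypothetical route records.
    routes = [
        {"local_pref": 100, "as_path": ["AS2", "AS5"], "router_id": "10.0.0.2"},
        {"local_pref": 200, "as_path": ["AS3", "AS4", "AS5"], "router_id": "10.0.0.1"},
    ]

    best = min(routes, key=lambda r: (-r["local_pref"],   # higher pref wins
                                      len(r["as_path"]),  # then shorter path
                                      r["router_id"]))    # then tie-breaker
    print(best)   # local-pref 200 wins despite the longer AS path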

Regarding the BGP routing tables, we have the BGP Routing Information Base (RIB), which aggregates all BGP reachability announcements and is kept in control-plane memory. We also have the BGP Forwarding Information Base (FIB), the output of the BGP decision process run on the RIB. It contains one route per destination prefix, is used when forwarding packets and is kept in data-plane memory (fast memory).

Routers need to store information about all destinations: prefixes and AS paths. There are so many prefixes because of multihoming, traffic engineering and the IANA allocation policies. With the existing mechanisms, ingress TE is problematic. Since reachability is announced to the whole Internet, the size of the FIB can become a memory problem, and the updates propagating to everybody can hit CPU limitations.

Florin then talks about the possible solutions: disruptive (clean-slate) solutions and evolutionary solutions.

- Clean-slate architectures (CCN, "Networking Named Content"): the problems addressed are content availability, security and content location dependence: where vs. what, named hosts vs. named data, and host-to-host vs. many-to-many.
Communication is built on named data, with no notion of hosts.
The content model has two types of packets: Interest and Data. They match one to one, and Data packets carry security information.

The forwarding engine consists of the Forwarding Information Base (FIB), the Content Store and the Pending Interest Table (PIT).

Transport: operates on top of unreliable packet delivery services, has flow control, and CCN can take advantage of multiple interfaces. CCN names are composed of components and give relative access to data in a totally ordered tree.

Routing (intra-domain): works with link-state IGPs (OSPF and IS-IS); the link state can be customized, with some behavioral differences from IP. BGP has the equivalent of an IGP TLV, and the topology is described at the AS level rather than per network prefix.

Implementation: it is in C and Java; Interest and Data packets are sent over UDP, and the VoCCN implementation is built on linphone. Conclusions: CCN is less efficient than TCP but better than HTTPS and HTTP.

IP MOBILITY MANAGEMENT

Two basic aspects are envisaged in mobility management: mobile terminal idle and mobile terminal active. A signaling traffic load is associated with the mobility-tracking procedures, with a strong impact on both the fixed network (consuming a significant amount of bandwidth) and the radio interface (where the electromagnetic resource is rather scarce).

About the implementations in Cellular systems, Vicente Casares talks about the location Management in the different systems:

2G -> a two-level hierarchical structure (HLR and VLR) and a cell layout organized into Location Areas (LAs).

2.5G (GPRS) -> a two-level hierarchical structure (SLR, and GR as an extension of the HLR) and a cell layout organized into Routing Areas.

3G (UMTS) -> a two-level hierarchical structure (SGSN and GGSN) and a cell layout that combines UTRAN Registration Areas and Routing Areas.

About the solutions for all-IP networks, two types or levels of mobility are defined:

-Macromobility: the movement of mobile users between two network domains. It is handled by MIP and its enhancements. MIP allows a node to move from one network to another without changing its IP address, lets nodes maintain all ongoing communications while moving, and is scalable, robust and secure. Vicente Casares explains two variants: for IPv4 and for IPv6.

-Micromobility: the movement of mobile users between two subnets within one administrative domain. Different types of implementations exist: IDMP (Intra-Domain Mobility Management Protocol), MIP-RR (MIP Regional Registration), HMIP (Hierarchical MIP), CIP (Cellular IP) and HAWAII.

Energy-oriented Internet (Datacenters and Clouds)

Datacenters need 26 GW to operate. Within datacenters, 47% of the energy consumption goes to the servers and 34% to cooling the devices. To improve energy efficiency there are several options, depending on the situation: when servers are operating, virtualization (virtual servers that allow resource sharing); when they are idle, sleep mode; and in the intermediate situations between idle and operating, job aggregation.

Another consideration is the placement of datacenters: the power needed to cool the devices can be reduced by moving datacenters to cold places. We talked about the rumor of moving datacenters to the poles.
To improve energy awareness there are the following options: sleep mode (implemented with modular architectures and hierarchical devices), elastic capacity provisioning (adapting capacity to traffic fluctuations so idle servers can be turned off) and powerfarm (recursive power-on of devices upon request, allowing parallel operations). The server energy model combines sleep mode and job aggregation to save energy, GHG emissions and money. In multicore servers, job aggregation is possible.

The developments in the areas of energy awareness/efficiency and network/site security have been considerable but separate. However, there are areas in common. A new perspective on the situation is that attacks could change their main aims, exploiting weaknesses in power-saving and management mechanisms to disrupt services, or even attempting to increase the energy consumption of an entire farm, causing financial damage.

The priority is not the most power-hungry devices but the most energy-sensitive ones. An attack on such a system could raise the energy cost; neutralize energy-saving systems (an attack can use just enough load to prevent the energy-saving mechanisms from triggering and to raise the operating temperature); exhaust the agreed power budget (exceeding contractual limits results in economic penalties, or even exceeding the physical power limits results in outages); increase dirty emissions; and leverage the IDS/IPS (making them consume more CPU even on unsuccessful attempts).

In conclusion, attacks may explicitly impact energy-related issues such as energy cost, energy consumption or GHG emissions. A possible solution to this problem is power capping, which sets a maximum power-consumption threshold and always operates the facility below that value. Another solution is a power-monitoring system that, if an increase is detected, takes the corresponding actions to decrease the power, such as job de-scheduling/migration, CPU voltage/frequency scaling, downclocking devices or forcing sleep mode.
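
A toy sketch of the power-capping idea: when measured power exceeds the cap, shed the least important jobs until it does not. The cap, the job records and the notion of "power per job" are all simplifications for illustration.

    POWER_CAP_W = 10_000

    # Shed the least important jobs (lowest priority number) until the
    # estimated draw is under the cap; a real system would also consider
    # DVFS, downclocking or forced sleep modes before killing work.
    def enforce_cap(measured_w, jobs):
        jobs = sorted(jobs, key=lambda j: j["priority"], reverse=True)
        while measured_w > POWER_CAP_W and jobs:
            victim = jobs.pop()              # de-schedule or migrate this job
            measured_w -= victim["power_w"]
        return measured_w, jobs

    print(enforce_cap(11_500, [{"priority": 1, "power_w": 900},
                               {"priority": 5, "power_w": 800}]))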


Energy-oriented Internet (The Network)

Continuing with the previous class, Sergio Ricciardi talks about energy consumption in networks. He explains that bandwidth has increased by a factor of 1000 in 10 years, while energy consumption has increased by a factor of 10. An Optical Cross-Connect node (OXC) with micro-electro-mechanical system (MEMS) switching logic consumes about 1.2 W per 10 Gb/s capable interface, whereas a traditional IP router requires about 237 W per port, so it is obvious that the power consumption of electronic switching is far higher than that of optical switching.
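
A quick energy-per-bit comparison from those figures (both taken per 10 Gb/s interface):

    oxc_w, router_w, rate_bps = 1.2, 237.0, 10e9
    print(oxc_w / rate_bps * 1e9)     # ~0.12 nJ per bit, optical switching
    print(router_w / rate_bps * 1e9)  # ~23.7 nJ per bit, electronic routing
    # i.e. roughly a 200x difference in energy per switched bit
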
When designing an optical infrastructure, some things have to be taken into account. 3R regeneration should be avoided as much as possible when planning, designing and managing new paths. EDFAs perform better than SOAs (higher gain; lower insertion loss, noise and crosstalk effects) but also have higher energy consumption. The use of dispersion-compensating fibers will reduce the dispersion of the optical signal traversing the fiber and reduce the number of required optical amplifiers.

Energy consumption is currently dominated by the access network, because of the high number of end-point devices. With rising traffic volumes, the major consumption is expected to shift from access to core networks; energy consumption also grows in backbone networks. In conclusion, access networks dominate at low rates and network routers dominate at higher rates, so the remedies are to reduce hop count, improve router efficiency (technology), employ energy-aware algorithms and protocols, manage routers better (sleep states), develop better network architectures, and manage the distribution and replication of content.

However, current router architectures are not energy-aware. One way to reduce energy consumption is to focus on energy-aware architectures that can adapt their behavior, and thus their energy consumption, to the current traffic load (an approach advocated by standardization bodies and governmental programs, and assumed in many literature sources). Router power consumption has two parts: a fixed part, required just to keep the device on, and a variable part that is roughly proportional to the traffic load.
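
A minimal sketch of that two-part model; the coefficients are illustrative (the 237 W figure reuses the per-port number quoted above, while the baseline is invented):

    # P(load) = P_fixed + (P_max - P_fixed) * load, with load in [0, 1].
    def router_power(load, p_fixed=150.0, p_max=237.0):
        return p_fixed + (p_max - p_fixed) * load

    for load in (0.0, 0.5, 1.0):
        print(load, router_power(load))   # -> 150.0, 193.5, 237.0 W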

In an attempt to reduce energy consumption, the study of network energy models becomes important. There are different kinds of energy models:

- Analytic: parameters plus a mathematical description of the network; an unambiguous formula, abstraction and generalization, at the price of modeling difficulty and complexity.
- Experimental: energy consumption of real-world devices, experimentally measured, with inter/extrapolation of the data; cannot be used for future energy-aware architectures.
- Theoretical: theoretical predictions of the energy consumption as functions of the router size and/or the traffic load; simple and clear, but the predictions may substantially differ in the long run from the real energy consumption values.

Because our systems are idle but powered on much of the time, one possible solution is sleep mode. There are different ways to implement it: per node (downclocking and energy-proportional computing) and per interface (Adaptive Link Rate, Low Power Idle, STOP-START).

Per-interface energy-saving techniques have to take into account that faster interfaces require less energy per bit than slower ones, and that there are low-utilization periods even though energy consumption is throughput-independent. To solve this, the idea is to temporarily switch off or downclock unloaded interfaces and line cards (per-interface sleep mode), through Adaptive Link Rate (ALR) and Low Power Idle (LPI). ALR dynamically modifies the link rate according to the real traffic needs; with LPI, transmission on an idle interface is stopped when there is no data to send and quickly resumed when new packets arrive.
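
A toy ALR policy in this spirit: step the link down to the slowest rate that still covers the observed traffic, with some headroom. The rate set and the headroom factor are illustrative, not taken from the IEEE work.

    RATES_MBPS = [100, 1000, 10_000]    # illustrative Ethernet-style rates

    def alr_rate(offered_mbps, headroom=1.25):
        # Pick the slowest rate that still fits the load plus headroom.
        for rate in RATES_MBPS:
            if offered_mbps * headroom <= rate:
                return rate
        return RATES_MBPS[-1]

    print(alr_rate(60))    # -> 100
    print(alr_rate(700))   # -> 1000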

Other solutions tackle energy awareness at the protocol level: an energy-aware OSPF-TE aims to minimize GHG emissions by routing connection requests through green network elements, and energy-aware RWA algorithms aim to minimize GHG emissions, power consumption and costs.

Energy-oriented Internet (The Problem)

One of the problems is that human activities have severe impacts on the environment. These impacts are measured in three dimensions: energy consumption, GHG emissions and cost.

Regarding energy consumption, ICT (information and communications technology) consumes 7% of the electrical energy produced worldwide and is responsible for 2-3% of the world's GHG (greenhouse gas) emissions; that is a lot of energy, and consumption keeps growing. The Internet consumes around 240 GW, the equivalent of 12.6% of the electrical power produced worldwide; covering this demand would require around 240 nuclear power plants just to power the Internet. These figures are driven by the growth of total Internet traffic and of the number of connected users.
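
A quick sanity check on those figures, assuming roughly 1 GW of output per nuclear plant:

    internet_gw = 240
    share = 0.126                 # 12.6% of worldwide electrical power
    print(internet_gw / 1.0)      # ~240 plants at ~1 GW each
    print(internet_gw / share)    # ~1900 GW produced worldwide in total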

ICT suffers from a vicious cycle: the useful work heats the devices, which then need to be cooled, so energy is consumed twice: powering the devices and maintaining them (UPS + cooling). ICT's GHG emissions during the use phase are indirect, but the network infrastructure alone consumes 22 GW (1.16% of the electrical power produced worldwide).

Any solution to this problem must take into account Life Cycle Assessment (LCA) and both direct and indirect impacts, to ensure the "solution" does not fall into the rebound effect (increased energy efficiency -> overall reduced costs -> increased demand -> increased energy consumption -> GHG emissions overtake the savings gained by energy efficiency).

Two possible approaches are carbon neutrality and zero carbon (renewable energy). The first has several variants: set a limit and buy credits from virtuous actors (cap & trade), pay for the damage to compensate emissions (carbon offsets), pay as you go (carbon taxes), and incentives. Zero carbon has advantages and drawbacks. The advantages are that this energy is virtually unlimited, free in the use phase (zero cost) and beneficial over its entire life cycle; the drawbacks are its low efficiency and the fact that it is not always available or applicable.

One of the things we can do is apply the so-called energy-oriented paradigm to the Internet. It consists of three blocks aimed at reducing energy consumption: energy efficiency (equipment energy consumption, architectures), energy awareness (intelligent technology, algorithms & protocols) and energy-oriented infrastructures (where the Smart Grid and LCA are essential).

We can apply this paradigm at three levels: application (the power state of the computers and the currently set power policy), system/middleware (load balancing, scheduling and task distribution, shutting down idle nodes) and networking (minimizing transfer time, reducing the data to be transmitted, shutting down idle interfaces, etc.).

INTERNET ACCESS TECHNOLOGIES

In this class Vicente Casares talks about broadband access networks for the Internet. Access networks can be classified into two types: wired and wireless. In the first type we find xDSL and FTTx. xDSL technology runs over metallic cables and has multiple applications such as ISDN or asymmetric access (ADSL). FTTx runs over optical fibers, and there are four types of deployment: FTTN (fiber to the node), FTTC (fiber to the curb), FTTB (fiber to the building) and FTTH (fiber to the home).

In wireless access networks there is more variety: cellular networks (such as GSM, 3G or LTE), WLANs, satellites and cordless systems.
He also said that fixed and mobile traffic will grow a lot in the coming years. The Cisco VNI predicts that global mobile data traffic will double every year through 2014, increasing 39-fold between 2009 and 2014.

GSM -> This wireless network allows mobility, provides total ubiquity and allows coverage areas of close to 40 km. The technology uses TDMA with 124 carriers and a total bandwidth of 25 MHz each for the uplink (890-915 MHz) and the downlink (935-960 MHz). E-GSM (extended GSM) covers a bit more bandwidth, adding 9.8 MHz. GSM also uses the 1800 MHz (DCS 1800) and 1900 MHz (PCS 1900) frequency bands. GSM evolved as technology improved, adding commercial services such as telephony and short messages, plus improvements in advanced data-transmission services together with voice-coding techniques.

GPRS -> Appeared from the need to increase the number of data services offered over GSM. The objective of this technology was to reach rates around 170 kbps, use dynamic slot allocation when needed, and enhance the service facilities at moderate cost, without big investments in the GSM infrastructure. It combines FDMA and TDMA and offers two classes of services: point-to-point (connection-oriented and connectionless) and point-to-multipoint (multicast and group services). It introduces two new nodes: the GGSN (the logical interface to external packet data networks) and the SGSN (in charge of delivering data packets to the mobile stations located within its service area).

UMTS -> Appeared as a result of technical, network and service evolution. This technology uses 12 FDD carriers for uplink and downlink and 5 TDD carriers, and supports conversational (real-time), streaming, interactive and background applications. UMTS systems have been enhanced with HSPA, which consists of two components: HSDPA (defines a new transport channel that allows all available resources to be assigned to one or more users efficiently) and HSUPA (the dedicated channels have been enhanced).

SAE -> System Architecture Evolution. 3GPP worked on the specifications of the EPC (Evolved Packet Core). The EPC is a multi-access core network based on the Internet Protocol that enables operators to deploy and operate one common packet core network. It is defined around three important paradigms: mobility, policy management and security.

LTE -> Long Term Evolution is the evolution of UMTS (3G -> 4G). This technology is designed to deliver significantly higher levels of capability and performance. It will co-exist with the WCDMA and HSPA networks and introduces a new radio-interface technology based on OFDM. LTE access is based on shared-channel access, providing peak data rates of 300 Mbps on the downlink and 75 Mbps on the uplink. LTE (E-UTRAN) is connected only to the EPC, and its protocols and user-plane functions have been optimized for the transmission of traffic from IP-based real-time and non-real-time applications/services.

Internet of Things II

6LoWPAN -> 6LoWPAN runs the IPv6 protocol over low-power wireless personal area networks. Its benefits are open, long-lived and reliable standards, an easy learning curve, transparent Internet integration, network maintainability, global scalability and end-to-end data flows. It is used for facility, building and home automation, personal sports & entertainment, security and safety, and industrial automation, among others.

6LoWPAN is characterized by stateless header compression, a standard socket API, minimal use of code and memory, direct end-to-end Internet integration and multiple topology options. It also provides efficient UDP header compression, network autoconfiguration using neighbour discovery, and unicast, multicast and broadcast support.
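
The autoconfiguration relies on deriving the IPv6 interface identifier from the node's link-layer address. Here is a sketch of the usual EUI-64 rule (invert the universal/local bit, in the style of RFC 4291); the example address is made up.

    # Derive an IPv6 interface identifier from an IEEE 802.15.4 EUI-64
    # by inverting the universal/local bit of the first byte.
    def iid_from_eui64(eui64: bytes) -> bytes:
        iid = bytearray(eui64)
        iid[0] ^= 0x02
        return bytes(iid)

    eui64 = bytes.fromhex("00124b0001020304")   # hypothetical radio address
    print(iid_from_eui64(eui64).hex())          # -> 02124b0001020304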

Its protocol stack has five levels, from the physical level to the application level. It is very useful with low-power link layers such as IEEE 802.15.4, narrowband ISM and power-line communications.
LoWPANs are stub networks, and there are three different kinds: simple, extended and ad hoc. Nowadays there are still some problems when integrating this technology, such as the maximum transmission unit, security and the application protocols.

RPL -> RPL is the routing protocol for smart-object networks. In low-power and lossy networks there is strong interest in using several objective functions, because deployments vary greatly and a single network may carry traffic with very different requirements in terms of path quality. RPL specifies how to build a Destination Oriented Directed Acyclic Graph (DODAG), shaped by an objective function.

In contrast with tree topologies, DODAGs offer redundant paths: if the topology permits, there is always more than one path between a leaf and the DODAG root, and traffic can be split across different paths optimized according to its requirements.

RPL control messages are carried in ICMPv6 (defined in RFC 4443), consisting of an ICMPv6 header followed by a message body. There are four kinds: DIO (DODAG Information Object), DIS (DODAG Information Solicitation), DAO (Destination Advertisement Object) and secure variants of each message type.

ISA100 -> A standards effort from the Instrumentation, Systems, and Automation Society (ISA); ISA100 is the group that standardizes wireless systems for automation.

INTERNET OF THINGS

A smart object is an item equipped with a form of sensor or actuator, a tiny microprocessor, a communication device and a power source. Smart objects can interact with the physical world by performing limited forms of computation, and they can communicate with the outside world and with other smart objects.

The appearance of modern smartphones changed the general view on connectivity, because Internet access became truly ubiquitous.

Technical challenges for smart objects include the node-level internals of each smart object, such as power consumption and physical size, as well as the network-level mechanisms and structures formed by the smart objects. The design of the network protocols for smart objects must take power consumption into account when, for example, deciding when and where to send data.

IP -> To interconnect these smart objects we can use IP. The Internet Protocol for Smart Objects (IPSO) Alliance was set up for the purpose of spreading awareness of the technology around smart objects.

IPv6 -> The IETF developed IPv6. It enhances many of the IPv4 functionalities, offers a much larger address pool, and provides better support for security and mobility while preserving the fundamental protocol architecture of IPv4; however, the cost of migration has slowed down the adoption rate of IPv6.

ROUTING -> Networks of smart objects differ significantly from "traditional" IP networks. Routing involves protocols and mechanisms to compute paths in a multi-hop network at layer 3 (IP). With the emergence of multiple types of low-power link layers, it became obvious that routing at the network layer is the best option.

TRANSPORT -> For smart objects, the advantages of TCP are reliability, control of the maximum size of its packets, and interoperability with existing systems. TCP headers are large compared to UDP headers, but they can be compressed. Many smart-object networks operate over links where packets can be lost, so reliable delivery of data is all the more important; the TCP MSS option is very useful here.

TECHNOLOGY -> The hardware of smart objects consists of four main components: a communication device, a microcontroller, sensors or actuators, and a power source.

ENERGY MANAGEMENT -> Power optimization must occur both at the hardware and the software level. For radio-equipped smart objects, the radio transceiver is the most power-consuming component.

In a star network, the central node has its radio turned on all the time. All of the other battery-powered nodes keep their radios switched off to conserve energy and only when the nodes have data to send do they switch on their radio to transmit a message. This network is simple and useful, but it constrains the range of the smart object network to that of the physical transmission range of the radio transceivers.

In a mesh network, all nodes can talk to each other and form a robust multi-hop network. The network can be dynamically extended as needed by adding more nodes. The new nodes automatically join the network and act as relay nodes that forward traffic.

Asynchronous low-power listening (LPL): the X-MAC protocol achieves low-power operation by switching the radio off most of the time and periodically switching it on for a short time. How long the radio stays on and off is configurable and depends on the predicted traffic load of the network.
Time-synchronized power saving: the TSMP protocol provides a long lifetime by switching the radio off as often as possible, and achieves high reliability by constantly changing the physical radio frequency on which packets are sent. The network is centrally managed, so the entire network is scheduled by a network manager.
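
A back-of-the-envelope model of such duty cycling: the radio draws its active current only during the short listen window of each wake-up period. The currents and timings below are illustrative, not measured values.

    # Average current of a duty-cycled radio: on for on_ms out of period_ms.
    def avg_current_ma(on_ms, period_ms, i_on_ma=20.0, i_sleep_ma=0.02):
        duty = on_ms / period_ms
        return duty * i_on_ma + (1 - duty) * i_sleep_ma

    print(avg_current_ma(on_ms=5, period_ms=500))  # ~0.22 mA vs 20 mA always-on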

COMMUNICATION -> Smart-object communication patterns can be divided into three categories, used in different applications: one-to-one, one-to-many and many-to-one.
Physical communication standards: three different mechanisms for smart objects are discussed, two radio transmission mechanisms, IEEE 802.15.4 and low-power IEEE 802.11, plus PLC.

The most important difference between the three mechanisms is the range of their physical signals.
PLC technology, which sends information over the power lines, allows a large existing network to be used for communication and is a great option for deploying smart objects. Wi-Fi, in contrast, is constrained by the low power consumption that smart objects require.

APPLICATIONS -> The applications include home automation, building automation, container tracking and many more possibilities.



Standards-developing organizations (SDOs) and forums for telecommunications.

In this lecture, Vicente Casares talks about SDOs and forums for telecommunications. He said that a lot of standardization bodies exist, but with good coordination between them, and he talks about some of them.

IETF

IETF, the Internet Engineering Task Force, has as its mission to make the Internet work better by producing high-quality, relevant technical documents that influence the way people design, use and manage the Internet. The IETF is a large, open international community, and it tries to avoid policy and business questions as much as possible. The actual technical work of the IETF is done in its working groups, organized by topic into several areas. The IETF holds meetings three times per year. So the IETF is not a conference, nor a traditional standards organization, nor a membership organization, and it in no way runs the Internet. The IETF rejects kings, presidents and voting, and believes in rough consensus and running code.

The IETF has a defined hierarchy. The first step of this hierarchy is ISOC (the Internet Society), an international, non-profit membership organization that fosters the expansion of the Internet. It is structured into the IAOC, the IAD and IASA, and only the last has influence on IETF standards development. The second step is the IESG (Internet Engineering Steering Group), which is responsible for the technical management of IETF activities and the Internet standards process. It ratifies or corrects the output from the IETF's Working Groups (WGs), gets WGs started and finished, and makes sure that non-WG drafts that are about to become RFCs are correct. There are many areas to cover, and each area has working groups and an Area Director (AD); the ADs together comprise the IESG.

The third step of the hierarchy is the IAB (Internet Architecture Board), responsible for keeping an eye on the "big picture" of the Internet. The fourth step is IANA (Internet Assigned Numbers Authority), the central coordinator for the assignment of unique parameter values for Internet protocols. IANA is the core registrar for the IETF's activities and the body responsible for the global coordination of some of the key elements that keep the Internet running smoothly. IANA is also responsible for the operation and maintenance of a number of key aspects of the DNS, and it coordinates allocations to the Regional Internet Registries.
In the fifth step there is the RFC Editor, which edits, formats and publishes Internet-Drafts as RFCs, working in conjunction with the IESG. Once an RFC is published, it is never revised. As well as producing RFCs, the IETF is a forum where network operators, hardware and software implementers, and researchers talk to each other to ensure that future protocols, standards and products will be even better.

The next step of the hierarchy is IETF Secretariat. It is under contract to IASA, which in turn is financially supported by the fees of the face-to-face meetings. The IETF Secretariat provides day-to-day logistical support and is responsible for keeping the official Internet-Drafts directory up to date and orderly, maintaining the IETF web site and helping the IESG do its work.
Finally, the last step is IETF Trust. The reason the IETF Trust was set up is that someone has to hold intellectual property, and that someone should be a stable, legally-identifiable entity.
The IETF is completely open to newcomers and it does not standardize transmission hardware but does standardize all the protocol layers in between, from IP itself up to general applications like email and HTTP.

 IEC, ISO, ITU

IEC, International Electrotechnical Commission, provides a platform to companies, industries and governments for meeting, discussing and developing the International Standards they require. The IEC is the world’s leading organization that prepares and publishes International Standards for all electrical, electronic and related technologies. It is one of three global sister organizations (IEC, ISO, ITU) that develop International Standards for the world.

ISO, International Organization for Standardization, is a non-governmental organization that forms a bridge between the public and private sectors. On the one hand, many of its member institutes are part of the governmental structure of their countries, or are mandated by their government. On the other hand, other members have their roots uniquely in the private sector, having been set up by national partnerships of industry associations. ISO enables a consensus to be reached on solutions that meet both the requirements of business and the broader needs of society.

ITU, the International Telecommunication Union, has coordinated the shared global use of the radio spectrum, promoted international cooperation in assigning satellite orbits, worked to improve telecommunication infrastructure in the developing world, established the worldwide standards that foster seamless interconnection of a vast range of communications systems, and addressed the global challenges of our times, such as mitigating climate change and strengthening cyber-security. The ITU is committed to connecting the world and is composed of different sectors: ITU-T, ITU-R, ITU-D and ITU Telecom. ITU-T (Standardization) carries out the ITU's best-known and oldest activity, standards-making. ITU-R (Radiocommunications) is responsible for managing the international radio-frequency spectrum and satellite-orbit resources. ITU-D (Development) was established to help spread equitable, sustainable and affordable access to information and communication technologies (ICT). ITU TELECOM brings together the top names from across the ICT industry, as well as ministers, regulators and many more, for a major exhibition, a high-level forum and a host of other opportunities.

ETSI

ETSI, the European Telecommunications Standards Institute, produces globally applicable standards for Information and Communications Technologies (ICT), including fixed, mobile, radio, converged, broadcast and Internet technologies. The following structure has been created to support the activities of the Members of ETSI: a General Assembly (the highest decision-making authority in ETSI), a Board, a Secretariat and various Technical Bodies. ETSI's purpose is to produce and maintain the technical standards and other deliverables required by its members. Much of this work is carried out in committees (Technical Bodies) and working groups composed of technical experts from the Institute's member companies and organizations. For certain urgent items of work, ETSI may also convene Specialist Task Forces (STFs): small groups of technical experts, usually seconded from ETSI members, who work intensively over a period of time, typically a few months, to accelerate the drafting work. ETSI recognizes three types of TB:

– ETSI Technical Committee: A Technical Committee is a semi-permanent entity organized around a number of standardization activities addressing a specific technology area.

– ETSI Project: An ETSI Project is similar to a Technical Committee but is established on the basis of a market sector requirement rather than on a basic technology.

– ETSI Partnership Project: An ETSI Partnership Project is an activity established when there is a need to co-operate with other organizations to achieve a standardization goal and where that co-operation cannot be accommodated within an ETSI Project or Technical Committee.

ETSI may also establish Special Committees (SCs), semi-permanent entities organized around a number of standardization activities addressing a specific technology area or related topic, and Industry Specification Groups (ISGs), which offer a very quick and easy alternative to creating an industry forum.
Each TB establishes and maintains a work program consisting of Work Items (WIs). An ETSI WI is the description of a standardization task and normally results in a single standard, report or similar document. The TB approves each WI, which is then formally adopted by the whole membership. A TB takes its decisions, including approval of draft deliverables, either by simple consensus or by a weighted vote.

3GPP

The original scope of 3GPP, the 3rd Generation Partnership Project, was to produce Technical Specifications and Technical Reports for a 3G mobile system based on evolved GSM core networks and the radio access technologies that they support (i.e., Universal Terrestrial Radio Access (UTRA), in both Frequency Division Duplex (FDD) and Time Division Duplex (TDD) modes). The scope was subsequently amended to include the maintenance and development of the Global System for Mobile communication (GSM) Technical Specifications and Technical Reports, including evolved radio access technologies (e.g. General Packet Radio Service (GPRS) and Enhanced Data rates for GSM Evolution (EDGE)). The Technical Specification Groups are TSG GERAN (GSM EDGE Radio Access Network), TSG RAN (Radio Access Network), TSG SA (Service & Systems Aspects) and TSG CT (Core Network & Terminals).

In America there is another standardization institute, ANSI (the American National Standards Institute).



Impact of the Internet on Society and European Actions for the Development of the Internet

In this class, Josep Solé talks about the impact of the Internet on society and the European actions for the development of the Internet.

Today, the decline of traditional communications is obvious. Many people have noticed it in the last telephone bills they have received: the number of calls and SMS has decreased. In contrast, WhatsApp and other applications that let you talk with anyone anywhere (instant messaging), together with social networks, are currently the most successful ways to communicate with other people. This drives an increase in the use of the Internet and broadband, and in the penetration of laptops, smartphones and other intelligent portable devices.

A few years ago, using the Internet as a primary tool to meet people was frowned upon; today it is a reality, and all young people use it every day to communicate among themselves. And it is not only the use of social networks and instant messaging that has increased: other Internet activities have grown exponentially lately, such as blog writing and reading, wikis, microblogging, weblogs, podcasts, RSS, voice over IP, mashups and repositories. Josep Solé explains a little about each of these and shows us graphs that prove the high impact of the Internet on today's society.

Josep then talks about how broadband will be increasingly ubiquitous, since the growth rate is higher for mobiles with broadband access. He also said that the increasing presence of the Internet in society is a first step towards smart cities, and that video traffic will dominate the Internet.
He also explains how organizations will take advantage of this situation. 89% of companies believe that the private cloud will be the next logical step for organizations that have already implemented virtualization, and 3 out of 4 companies use the cloud for flexibility and cost savings. In addition, publishing companies will use the rise of portable devices as a tool to reach as many people as possible.

Finally, he explains which European actions are currently taking place for the development of the Internet: the Lisbon Strategy and the i2010 initiative. For the near future, support for eGovernment, increased bandwidth, reduced latency and broadband for everyone are the most important objectives to implement.





Blogosphere & Microblogging
In this invited lecture, Joaquín Salvachúa first talks about blogs. Blogs are all isomorphic: they share the same structure and run on a very simple mechanism. He also said that blog reader comments establish links between blogs, and these links formed the social graph before social networks existed.

Blogosphere
Blogs allow multiple conversations (one to many) through comments and trackbacks, while the plain web does not. The blogosphere is a network of blogs with multiple conversations ongoing. A spider is not resistant to problems and moves in one direction, while a starfish moves in all directions, and if one part of its structure goes down, another grows back.

Microblogging
A blog is an online diary that can contain multiple links to many other sites. Microblogging is the same, but each post is limited to about 140 characters. Another difference from typical blogs is that relationships are asymmetric: you don't have to follow somebody just because they are following you.
If you want to find a post, there are folksonomies (tags created by people) and ontologies (tags created by the system) that let you see the topics you are interested in.

Wikis
A wiki is a website that can be edited by multiple users. Its characteristics are an easy notation format, the ability to modify, create or delete information, and preservation of the change history. There are many wiki engines available, in a wide variety of packages.

Ward Cunningham said that a wiki invites users to edit pages or create new ones with an ordinary web browser, and that it promotes associations between different pages, even if it does not look very tidy; to compensate, the content is refined through many small contributions.

There are four different types of wiki users: adders, who add information to the wiki; synthesizers, who summarize, organize and consolidate the information; minimalists, who only make minimal changes to the text, such as fixing spelling or leaving comments; and multiplexers, who take care of common tasks.

Wikipedia

Wikipedia is a free multilingual encyclopedia based on wiki technology. It is written by many volunteers, and anybody can modify the information it contains. It is the eighth most visited website and the biggest wiki in the world. Wikipedia is a novel place because it is free, collaborative and open to anybody, and its articles are in constant evolution with fast corrections.

Wikipedia's values are that it is an encyclopedia, that it seeks the neutral point of view, that it has no firm rules and that its content is free. This last value means that anybody can use, copy, distribute or modify the content in other works, but it does not mean anyone can write whatever they want. In case of doubt, Wikipedia applies the value of using common sense. Its content is a heritage of the world.

Wikipedia has grown exponentially in recent years, coinciding with the improvement of the Internet, and it works because there are more people building than destroying. Anybody can make useful contributions, and users' knowledge complements the information. All this structure is coordinated by the Wikimedia Foundation.
If you are a reader, Wikipedia is a useful tool, but you should read with critical insight: you can't believe everything you read, because something could be wrong. If you are a contributor, the experience can be rewarding but sometimes stressful, because vandalism or wars over points of view can happen.


New Generation Internet

HTML5

In this class Joan Quemada talks to us about HTML5, the new web platform.
First, he talks about the previous platform, HTML, which was born as a tool for creating hypertext documents and gained components like CSS and JavaScript. Its development was halted in 1999 in favour of XHTML, which did not succeed. Later, in 2004, important companies created the WHATWG, which continued the development of a new version of HTML: HTML5.

Then he talks about HTML5's features. This new platform includes everything that new applications need. The transition to this platform is in progress right now, and the definitive standard is expected around 2022, because it requires two prior complete implementations. HTML5 includes new tags that enable new web designs and new applications, and removes tags that are no longer necessary. It also includes CSS3, which gives designers many more possibilities in web design. In conclusion, HTML5 is perfect for building cool sites.
On the graphics side, HTML5 uses canvas, which defines a bitmap. It allows many possibilities: interactive applications, games, 2D and 3D graphics, etc.

HTML5 also includes tools that were developed for XHTML, for example vector graphics with SVG and formulas with MathML. The latter, I think, is very useful for engineers like us who do a lot of calculations. However, support for this kind of application is still partial; I suppose that is because there are newer HTML5 tools that do the job better.

Regarding storage, HTML5 implements several types of storage that are safer than in previous versions, and these new solutions do not consume bandwidth, since the data is kept on the client side.
HTML5 also provides several client-server communication mechanisms. The most popular are web sockets, web messaging, web workers, SSE and XHR2.
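
As a taste of the web-socket channel, here is a minimal server-side echo sketch using the third-party Python websockets package (an HTML5 page would connect with new WebSocket("ws://localhost:8765")); depending on the library version, the handler may also receive a path argument.

    import asyncio
    import websockets   # third-party package: pip install websockets

    async def echo(ws):
        async for message in ws:     # receive a message from the browser...
            await ws.send(message)   # ...and push it back on the same socket

    async def main():
        async with websockets.serve(echo, "localhost", 8765):
            await asyncio.Future()   # run forever

    asyncio.run(main())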

Today, geolocation and audio/video streaming matter especially on mobile phones, which is one of the reasons HTML5 includes them. Things like listening to a song, finding out where you are or watching a YouTube video in a web page are easier with the predefined objects HTML5 provides. The WebM project shows the impact these features are having on society.

At this moment a new API is being developed for HTML5 that provides Web Real-Time Communications, called WebRTC. Its objectives are to explore and exploit device capabilities to establish point-to-point communication in real time.
In conclusion, HTML5 is the present and the future of web pages. Supporting this conclusion are its zero-cost installation on the client and its easy incorporation into our web browsers.