Thursday, 19 January 2017

Random Question - PHP

Question: How does the array_walk function work in PHP?
Purpose: It is used to update the elements of the original array in place.
How: array_walk requires two parameters.
1st: the original array.
2nd: a callback function, which is used to update the array.
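A minimal sketch (the array contents and callback are illustrative):

<?php
$prices = array('book' => 100, 'pen' => 10);

// The callback receives each value by reference, plus its key.
array_walk($prices, function (&$value, $key) {
    $value = $value + 5; // update the original array in place
});

print_r($prices); // Array ( [book] => 105 [pen] => 15 )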

Question: How to find duplicate email records in the users table?


SELECT u1.first_name, u1.last_name, u1.email FROM users as u1
INNER JOIN (
    SELECT email FROM users GROUP BY email HAVING count(id) > 1
    ) u2 ON u1.email = u2.email;

Question: How to pass data in headers while using cURL?
$url='http://www.web-technology-experts-notes.in';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
  'HeaderKey: HeaderValue',
  'Content-Type: application/json',
  'X-Forwarded-For: xxx.xxx.x.xx'
));
echo curl_exec($ch);
curl_close($ch);



Question: How to pass JSON data in cURL?
$url='http://www.web-technology-experts-notes.in';
$jsonData='{"name":"Web Technology Experts","email":"contact@web-technology-experts-notes.in"}';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_POSTFIELDS, $jsonData);
echo curl_exec($ch);
curl_close($ch);



Question: What is the final keyword in PHP?
PHP provides the final keyword. Prefixing a method definition with final prevents child classes from overriding that method, and declaring a whole class final prevents it from being extended.
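A minimal sketch (the class and method names are illustrative):

<?php
class Payment {
    final public function charge($amount) {
        // Core billing logic that subclasses must not change.
        return $amount;
    }
}

class DiscountedPayment extends Payment {
    // Uncommenting the method below causes a fatal error:
    // "Cannot override final method Payment::charge()"
    // public function charge($amount) { return $amount * 0.9; }
}

echo (new DiscountedPayment())->charge(100); // 100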

Question: How can we prevent SQL injection in PHP?
Sanitize user data before storing it in the database, preferably by using prepared statements with bound parameters.
While displaying the data in the browser, convert all applicable characters to HTML entities using the htmlentities function (this guards against XSS when the stored data is output).
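A minimal sketch using PDO prepared statements (the DSN, credentials, and column names are illustrative):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'db_user', 'db_pass');

// The SQL and the user data travel separately, so user input can
// never be executed as SQL.
$stmt = $pdo->prepare('SELECT first_name, email FROM users WHERE email = :email');
$stmt->execute(array(':email' => $_POST['email']));
$user = $stmt->fetch(PDO::FETCH_ASSOC);

// When echoing stored data back to the browser, encode it first.
if ($user) {
    echo htmlentities($user['email'], ENT_QUOTES, 'UTF-8');
}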



Question: How to redirect https to http URL and vice versa in .htaccess?
Redirect https to http
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

Redirect http to https
RewriteEngine on
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
Question: What are the benefits of .htaccess?
We can do the following with .htaccess:
Route URLs
Manage error pages for better SEO
Redirect pages
Detect the client OS/device (mobile, desktop, iOS, Android, etc.)
Set PHP configuration variables
Set environment variables
Allow/deny visitors by IP address (see the snippet below)
Password-protect files/directories
Optimize website performance
Improve site security
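For example, a minimal sketch of IP-based access control (Apache 2.2 syntax; the IP address is illustrative, and Apache 2.4 uses the newer "Require ip" directive instead):

# Allow only one visitor IP, deny everyone else
Order Deny,Allow
Deny from all
Allow from 192.168.1.100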

Question: Difference between Apache and Nginx?
Nginx is based on event-driven architecture.
Apache is based on process-driven architecture.

Nginx development started only in 2002. 
Apache's initial release was in 1995.

Nginx doesn't create a new process for a new request.
Apache creates a new process for each request.

In Nginx, memory consumption is very low when serving static pages.
Because Apache creates a new process for each request, its memory consumption increases with load.

Nginx is extremely fast at serving static pages compared to Apache.

For complex configurations, Apache is easier to configure than Nginx.

Apache has more extensive documentation than Nginx.

Nginx does not support operating systems like OpenVMS and IBM i, whereas Apache supports a much wider range of operating systems.

The performance and scalability of Nginx are not completely dependent on hardware resources, whereas the performance and scalability of Apache depend on underlying hardware resources like memory and CPU.

How does HTTPS actually work?

HTTPS is simply your standard HTTP protocol slathered with a generous layer of delicious SSL/TLS encryption goodness. Unless something goes horribly wrong (and it can), it prevents people like the infamous Eve from viewing or modifying the requests that make up your browsing experience; it’s what keeps your passwords, communications and credit card details safe on the wire between your computer and the servers you want to send this data to. Whilst the little green padlock and the letters “https” in your address bar don’t mean that there isn’t still ample rope for both you and the website you are viewing to hang yourselves elsewhere, they do at least help you communicate securely whilst you do so.

1. What is HTTPS and what does it do?

HTTPS takes the well-known and understood HTTP protocol, and simply layers a SSL/TLS (hereafter referred to simply as “SSL”) encryption layer on top of it. Servers and clients still speak exactly the same HTTP to each other, but over a secure SSL connection that encrypts and decrypts their requests and responses. The SSL layer has 2 main purposes:
  • Verifying that you are talking directly to the server that you think you are talking to
  • Ensuring that only the server can read what you send it and only you can read what it sends back
The really, really clever part is that anyone can intercept every single one of the messages you exchange with a server, including the ones where you are agreeing on the key and encryption strategy to use, and still not be able to read any of the actual data you send.

2. How an SSL connection is established

An SSL connection between a client and server is set up by a handshake, the goals of which are:
  • To satisfy the client that it is talking to the right server (and optionally vice versa)
  • For the parties to have agreed on a “cipher suite”, which includes which encryption algorithm they will use to exchange data
  • For the parties to have agreed on any necessary keys for this algorithm
Once the connection is established, both parties can use the agreed algorithm and keys to securely send messages to each other. We will break the handshake up into 3 main phases - Hello, Certificate Exchange and Key Exchange.
  1. Hello - The handshake begins with the client sending a ClientHello message. This contains all the information the server needs in order to connect to the client via SSL, including the various cipher suites and maximum SSL version that it supports. The server responds with a ServerHello, which contains similar information required by the client, including a decision based on the client’s preferences about which cipher suite and version of SSL will be used.
  2. Certificate Exchange - Now that contact has been established, the server has to prove its identity to the client. This is achieved using its SSL certificate, which is a very tiny bit like its passport. An SSL certificate contains various pieces of data, including the name of the owner, the property (eg. domain) it is attached to, the certificate’s public key, the digital signature and information about the certificate’s validity dates. The client checks that it either implicitly trusts the certificate, or that it is verified and trusted by one of several Certificate Authorities (CAs) that it also implicitly trusts. Much more about this shortly. Note that the server is also allowed to require a certificate to prove the client’s identity, but this typically only happens in very sensitive applications.
  3. Key Exchange - The encryption of the actual message data exchanged by the client and server will be done using a symmetric algorithm, the exact nature of which was already agreed during the Hello phase. A symmetric algorithm uses a single key for both encryption and decryption, in contrast to asymmetric algorithms that require a public/private key pair. Both parties need to agree on this single, symmetric key, a process that is accomplished securely using asymmetric encryption and the server’s public/private keys.
The client generates a random key to be used for the main, symmetric algorithm. It encrypts it using an algorithm also agreed upon during the Hello phase, and the server’s public key (found on its SSL certificate). It sends this encrypted key to the server, where it is decrypted using the server’s private key, and the interesting parts of the handshake are complete. The parties are sufficiently happy that they are talking to the right person, and have secretly agreed on a key to symmetrically encrypt the data that they are about to send each other. HTTP requests and responses can now be sent by forming a plaintext message and then encrypting and sending it. The other party is the only one who knows how to decrypt this message, and so Man In The Middle Attackers are unable to read or modify any requests that they may intercept.
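To make the key-exchange idea concrete, here is a minimal sketch using PHP's openssl extension. It is purely illustrative (real TLS negotiates much more than this), but the trick of asymmetrically encrypting a symmetric key is the same:

<?php
// Server side: an RSA key pair (in TLS, the public half lives in the certificate).
$serverKeys = openssl_pkey_new(array('private_key_bits' => 2048));
$publicKey  = openssl_pkey_get_details($serverKeys)['key'];

// Client side: generate a random symmetric key and encrypt it with the
// server's public key. Anyone can do this...
$symmetricKey = random_bytes(32);
openssl_public_encrypt($symmetricKey, $encryptedKey, $publicKey);

// ...but only the server's private key can recover it.
openssl_private_decrypt($encryptedKey, $recoveredKey, $serverKeys);

// Both sides now share a secret and can encrypt traffic symmetrically.
$iv = random_bytes(16);
$cipherText = openssl_encrypt('GET / HTTP/1.1', 'aes-256-cbc', $recoveredKey, 0, $iv);
echo openssl_decrypt($cipherText, 'aes-256-cbc', $symmetricKey, 0, $iv); // GET / HTTP/1.1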

3. Certificates

3.1 Trust

At its most basic level, an SSL certificate is simply a text file, and anyone with a text editor can create one. You can in fact trivially create a certificate claiming that you are Google Inc. and that you control the domain gmail.com. If this were the whole story then SSL would be a joke; identity verification would essentially be the client asking the server “are you Google?”, the server replying “er, yeah totally, here’s a piece of paper with ‘I am Google’ written on it” and the client saying “OK great, here’s all my data.” The magic that prevents this farce is in the digital signature, which allows a party to verify that another party’s piece of paper really is legit.
There are 2 sensible reasons why you might trust a certificate:
  • If it’s on a list of certificates that you trust implicitly
  • If it’s able to prove that it is trusted by the controller of one of the certificates on the above list
The first criterion is easy to check. Your browser has a pre-installed list of trusted SSL certificates from Certificate Authorities (CAs) that you can view, add to and remove from. These certificates are controlled by a centralised group of (in theory, and generally in practice) extremely secure, reliable and trustworthy organisations like Symantec, Comodo and GoDaddy. If a server presents a certificate from that list then you know you can trust them.
The second criterion is much harder. It’s easy for a server to say “er yeah, my name is er, Microsoft, you trust Symantec and er, they totally trust me, so it’s all cool.” A somewhat smart client might then go and ask Symantec “I’ve got a Microsoft here who says that you trust them; is this true?” But even if Symantec says “yep, we know them, Microsoft is legit”, you still don’t know whether the server claiming to be Microsoft actually is Microsoft or something much worse. This is where digital signatures come in.

3.2 Digital signatures

As already noted, SSL certificates have an associated public/private key pair. The public key is distributed as part of the certificate, and the private key is kept incredibly safely guarded. This pair of asymmetric keys is used in the SSL handshake to exchange a further key for both parties to symmetrically encrypt and decrypt data. The client uses the server’s public key to encrypt the symmetric key and send it securely to the server, and the server uses its private key to decrypt it. Anyone can encrypt using the public key, but only the server can decrypt using the private key.
The opposite is true for a digital signature. A certificate can be “signed” by another authority, whereby the authority effectively goes on record as saying “we have verified that the controller of this certificate also controls the property (domain) listed on the certificate”. In this case the authority uses their private key to (broadly speaking) encrypt the contents of the certificate, and this cipher text is attached to the certificate as its digital signature. Anyone can decrypt this signature using the authority’s public key, and verify that it results in the expected decrypted value. But only the authority can encrypt content using the private key, and so only the authority can actually create a valid signature in the first place.
So if a server comes along claiming to have a certificate for Microsoft.com that is signed by Symantec (or some other CA), your browser doesn’t have to take its word for it. If it is legit, Symantec will have used their (ultra-secret) private key to generate the server’s SSL certificate’s digital signature, and so your browser can use their (ultra-public) public key to check that this signature is valid. Symantec will have taken steps to ensure the organisation they are signing for really does own Microsoft.com, and so given that your client trusts Symantec, it can be sure that it really is talking to Microsoft Inc.
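The sign/verify asymmetry can be sketched with PHP's openssl functions as well. The "certificate contents" string and key names below are illustrative, not a real X.509 certificate:

<?php
// A CA's key pair; in reality the private half is kept ultra-secret.
$caKeys   = openssl_pkey_new(array('private_key_bits' => 2048));
$caPublic = openssl_pkey_get_details($caKeys)['key'];

$certificateContents = 'owner=Microsoft.com; publicKey=...; validUntil=2018-01-01';

// Only the holder of the private key can produce this signature...
openssl_sign($certificateContents, $signature, $caKeys, OPENSSL_ALGO_SHA256);

// ...but anyone with the public key can verify it.
$valid = openssl_verify($certificateContents, $signature, $caPublic, OPENSSL_ALGO_SHA256);
echo ($valid === 1) ? 'Signature valid' : 'Signature invalid';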

3.3 Self-signing

Note that all root CA certificates are “self-signed”, meaning that the digital signature is generated using the certificate’s own private key. There’s nothing intrinsically special about a root CA’s certificate - you can generate your own self-signed certificate and use this to sign other certificates if you want. But since your random certificate is not pre-loaded as a CA into any browsers anywhere, none of them will trust you to sign either your own or other certificates. You are effectively saying “er yeah, I’m totally Microsoft, here’s an official certificate of identity issued and signed by myself,” and all properly functioning browsers will throw up a very scary error message in response to your dodgy credentials.
This puts an enormous burden on all browser and OS publishers to trust only squeaky clean root CAs, as these are the organisations that their users end up trusting to vet websites and keep certificates safe. This is not an easy task.

3.4 What are you trusting?

It’s interesting to note that your client is technically not trying to verify whether or not it should trust the party that sent it a certificate, but whether it should trust the public key contained in the certificate. SSL certificates are completely open and public, so any attacker could grab Microsoft’s certificate, intercept a client’s request to Microsoft.com and present the legitimate certificate to it. The client would accept this and happily begin the handshake. However, when the client encrypts the key that will be used for actual data encryption, it will do so using the real Microsoft’s public key from this real certificate. Since the attacker doesn’t have Microsoft’s private key in order to decrypt it, they are now stuck. Even if the handshake is completed, they will still not be able to decrypt the key, and so will not be able to decrypt any of the data that the client sends to them. Order is maintained as long as the attacker doesn’t control a trusted certificate’s private key. If the client is somehow tricked into trusting a certificate and public key whose private key is controlled by an attacker, trouble begins.

4. Really really fun facts

4.1 Can a coffee shop monitor my HTTPS traffic over their network?

Nope. The magic of public-key cryptography means that an attacker can watch every single byte of data exchanged between your client and the server and still have no idea what you are saying to each other beyond roughly how much data you are exchanging. However, your normal HTTP traffic is still very vulnerable on an insecure wi-fi network, and a flimsy website can fall victim to any number of workarounds that somehow trick you into sending HTTPS traffic either over plain HTTP or just to the wrong place completely. For example, even if a login form submits a username/password combo over HTTPS, if the form itself is loaded insecurely over HTTP then an attacker could intercept the form’s HTML on its way to your machine and modify it to send the login details to their own endpoint.

4.2 Can my company monitor my HTTPS traffic over their network?

If you are also using a machine controlled by your company, then yes. Remember that at the root of every chain of trust lies an implicitly trusted CA, and that a list of these authorities is stored in your browser. Your company could use their access to your machine to add their own self-signed certificate to this list of CAs. They could then intercept all of your HTTPS requests, presenting certificates claiming to represent the appropriate website, signed by their fake CA and therefore unquestioningly trusted by your browser. Since you would be encrypting all of your HTTPS requests using their dodgy certificate’s public key, they could use the corresponding private key to decrypt and inspect (even modify) your request, and then send it on to its intended location. They probably don’t. But they could.
Incidentally, this is also how you use a proxy to inspect and modify the otherwise inaccessible HTTPS requests made by an iPhone app.

4.3 So what happened with Lavabit and the FBI?

Lavabit was Edward Snowden’s super-secure email provider during the NSA leaks insanity of 2013. As we’ve seen, no amount of standard hackery could allow the FBI to see any data on its way between Lavabit and its customers. Without the private key for the Lavabit SSL certificate, the agency was screwed. However, a helpful US judge told the Lavabit founder, Ladar Levison, that he had to hand over this key, effectively giving the FBI free rein to snoop traffic to its heart’s content. Levison made a valiant attempt to stall by handing over the 2,560 character key in 11 hard copy pages of 4-point type, but was slammed with an order requiring him to hand over the key in a useful format or face a $5,000/day fine until he did.
Once he complied, GoDaddy, the Lavabit CA, revoked the certificate, having (correctly) deemed it compromised. This added the Lavabit certificate to a Certificate Revocation List (CRL), a list of discredited certificates that clients should no longer trust to provide a secure connection. Compromised, self-signed or otherwise untrustworthy certificates cause browsers to display a big red error message and to either discourage or outright prohibit further actions by the user. Unfortunately, browsers will continue to trust a broken certificate until they pull the newest updates to the CRL, a process which is apparently imperfect in practice.

5. Conclusion

HTTPS is not unbreakable, and the SSL protocol has to evolve constantly as new attacks against it are discovered and squashed. But it is still an impressively robust way of transmitting secret data without caring who sees your messages. There are of course many implementation details not mentioned here, such as the exact format and order of the handshake messages, abbreviated handshakes to pick up recent sessions without having to renegotiate keys and cipher suites, and the numerous different encryption options available at each stage. The key thing to remember is that whilst HTTPS keeps data safe on the wire to its destination, it in no way protects you (as a user or a developer) against XSS or database leaks or any of the other things-that-go-bump-in-the-night. Be happy that it’s got your back, but stay vigilant. In the immortal words of Will Smith, “Walk in shadow, move in silence, guard against extra-terrestrial violence.”
If you enjoyed this, you’ll probably enjoy my post explaining the details of 2015’s FREAK vulnerability in SSL.

What is the difference between Amazon S3 and Amazon EC2 instance?

An EC2 instance is like a remote computer running Windows or Linux, on which you can install whatever software you want, including a web server running PHP code and a database server.
Amazon S3 is just a storage service, typically used to store large binary files. Amazon also has other storage and database services, like RDS for relational databases and DynamoDB for NoSQL.
Although the title asks about the difference between Amazon S3 and an Amazon EC2 instance, since the goal is serving content to clients/users it is worth pointing out that Amazon S3 is not a true CDN. S3 was designed for content storage; the correct Amazon service to use for content delivery is Amazon CloudFront.

Wednesday, 18 January 2017

Understanding SOAP and REST Basics And Differences



REST is almost always going to be faster. The main advantage of SOAP is that it provides a mechanism for services to describe themselves to clients, and to advertise their existence.
REST is much more lightweight and can be implemented using almost any tool, leading to lower bandwidth and shorter learning curve. However, the clients have to know what to send and what to expect.
In general, when you're publishing an API to the outside world that is either complex or likely to change, SOAP will be more useful. Otherwise, REST is usually the better option.

Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) are two answers to the same question: how to access Web services. The choice initially may seem easy, but at times it can be surprisingly difficult.
SOAP is a standards-based Web services access protocol that has been around for a while and enjoys all of the benefits of long-term use. Originally developed by Microsoft, SOAP really isn’t as simple as the acronym would suggest.
The Difference between SOAP vs REST APIs
REST is the newcomer to the block. It seeks to fix the problems with SOAP and provide a truly simple method of accessing Web services. However, sometimes SOAP is actually easier to use; sometimes REST has problems of its own. Both techniques have issues to consider when deciding which protocol to use.
Before I go any further, it’s important to clarify that while both SOAP and REST share similarities over the HTTP protocol, SOAP is a more rigid set of messaging patterns than REST. The rules in SOAP are important because without these rules, you can’t achieve any level of standardization. REST as an architectural style does not require as much processing and is naturally more flexible. Both SOAP and REST rely on well-established rules that everyone has agreed to abide by in the interest of exchanging information.

A Quick Overview of SOAP

SOAP relies exclusively on XML to provide messaging services. Microsoft originally developed SOAP to take the place of older technologies that don’t work well on the Internet such as the Distributed Component Object Model (DCOM) and Common Object Request Broker Architecture (CORBA). These technologies fail because they rely on binary messaging; the XML messaging that SOAP employs works better over the Internet.
After an initial release, SOAP was submitted for standardization and eventually became a W3C recommendation. SOAP is designed to support expansion, so it has all sorts of other acronyms and abbreviations associated with it, such as WS-Addressing, WS-Policy, WS-Security, WS-Federation, WS-ReliableMessaging, WS-Coordination, WS-AtomicTransaction, and WS-RemotePortlets. In fact, you can find a whole laundry list of these standards on Web Services Standards.
The point is that SOAP is highly extensible, but you only use the pieces you need for a particular task. For example, when using a public Web service that’s freely available to everyone, you really don’t have much need for WS-Security.
The XML used to make requests and receive responses in SOAP can become extremely complex. In some programming languages, you need to build those requests manually, which becomes problematic because SOAP is intolerant of errors. However, other languages can use shortcuts that SOAP provides; that can help you reduce the effort required to create the request and to parse the response. In fact, when working with .NET languages, you never even see the XML.
Part of the magic is the Web Services Description Language (WSDL). This is another file that’s associated with SOAP. It provides a definition of how the Web service works, so that when you create a reference to it, the IDE can completely automate the process. So, the difficulty of using SOAP depends to a large degree on the language you use.
One of the most important SOAP features is built-in error handling. If there’s a problem with your request, the response contains error information that you can use to fix the problem. Given that you might not own the Web service, this particular feature is extremely important; otherwise you would be left guessing as to why things didn’t work. The error reporting even provides standardized codes so that it’s possible to automate some error handling tasks in your code.
An interesting SOAP feature is that you don’t necessarily have to use it with the HyperText Transfer Protocol (HTTP) transport. There’s an actual specification for using SOAP over Simple Mail Transfer Protocol (SMTP), and there isn’t any reason you can’t use it over other transports. In fact, developers in some languages, such as Python and PHP, are doing just that.

A Quick Overview of REST

Many developers found SOAP cumbersome and hard to use. For example, working with SOAP in JavaScript means writing a ton of code to perform extremely simple tasks because you must create the required XML structure absolutely every time.
REST provides a lighter weight alternative. Instead of using XML to make a request, REST relies on a simple URL in many cases. In some situations you must provide additional information in special ways, but most Web services using REST rely exclusively on obtaining the needed information using the URL approach. REST can use four different HTTP 1.1 verbs (GET, POST, PUT, and DELETE) to perform tasks.
Unlike SOAP, REST doesn’t have to use XML to provide the response. You can find REST-based Web services that output the data in Comma Separated Values (CSV), JavaScript Object Notation (JSON) and Really Simple Syndication (RSS). The point is that you can obtain the output you need in a form that’s easy to parse within the language you need for your application.
As an example of working with REST, you could create a URL for Weather Underground. The API’s documentation page shows an example URL of http://api.wunderground.com/api/Your_Key/conditions/q/CA/San_Francisco.json. The information you receive in return is a JSON formatted document containing the weather for San Francisco. You can use your browser to interact with the Web service, which makes it a lot easier to create the right URL and verify the output you need to parse with your application.
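Consuming such a REST endpoint from PHP is just as simple. A minimal sketch (Your_Key is the placeholder from the documentation and must be replaced with a real key; the structure of the response depends on the service):

<?php
$url = 'http://api.wunderground.com/api/Your_Key/conditions/q/CA/San_Francisco.json';

// A REST call here is just an HTTP GET; no XML envelope is needed.
$response = file_get_contents($url);

// Parse the JSON response into a plain PHP array.
$data = json_decode($response, true);
print_r($data);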

Deciding Between SOAP and REST

Before you spend hours fretting over the choice between SOAP and REST, consider that some Web services support one and some the other. Unless you plan to create your own Web service, the decision of which protocol to use may already be made for you. A few Web services, such as Amazon, support both. The focus of your decision often centers on which Web service best meets your needs, rather than which protocol to use.

Soap Vs Rest

SOAP is definitely the heavyweight choice for Web service access. It provides the following advantages when compared to REST:
  • Language, platform, and transport independent (REST requires use of HTTP)
  • Works well in distributed enterprise environments (REST assumes direct point-to-point communication)
  • Standardized
  • Provides significant pre-built extensibility in the form of the WS* standards
  • Built-in error handling
  • Automation when used with certain language products
REST is easier to use for the most part and is more flexible. It has the following advantages when compared to SOAP:
  • No expensive tools are required to interact with the Web service
  • Smaller learning curve
  • Efficient (SOAP uses XML for all messages, REST can use smaller message formats)
  • Fast (no extensive processing required)
  • Closer to other Web technologies in design philosophy

Locating Free Web Services

The best way to discover whether SOAP or REST works best for you is to try a number of free Web services. Rolling your own Web service can be a painful process, so it’s much better to make use of someone else’s hard work. In addition, as you work with these free Web services you may discover that they fulfill a need in your organization, and you can save your organization both time and money by using them.
One common concern about using a free Web service is the perception that it could somehow damage your system or network. Web services typically send you text, not scripts, code, or binary data, so the risks are actually quite small.
Of course, there’s also the concern that Web services will disappear overnight. In most cases, these Web services are exceptionally stable and it’s unlikely that any of them will disappear anytime soon. I’ve been using some of them now for five years without any problem. However, stick with Web services from organizations with a large Internet presence. Research the Web service before you begin using it.

Working with the Geocoder Web Service

To make it easier to understand how SOAP and REST compare, I decided to provide examples of both using the same free Web service, geocoder.us (thank you to Mark Yuabov for suggesting it). This simple Web service accepts an address as input and spits out a longitude and latitude as output. You could probably mix it with the Google Maps API example I present in “Using the Google Maps API to Add Cool Stuff to Your Applications.”

Viewing a Simple REST Example

Sometimes, simple is best. In this case, REST is about as simple as it gets because all you need is an URL. Open your browser—it doesn’t matter which one—and type http://rpc.geocoder.us/service/csv?address=1600+Pennsylvania+Ave,+Washington+DC in the address field. Press Enter. You’ll see the output in your browser in CSV format:
GeoCoder REST example
You see the latitude, followed by the longitude, followed by the address you provided. This simple test works for most addresses in most major cities (it doesn’t work too well for rural addresses, but hey, what do you expect for free?). The idea is that you obtain the latitude and longitude needed for use with other Web services. By combining Web services together with a little glue code, you can create really interesting applications that do amazing things in an incredibly short time with little effort on your part. Everyone else is doing the heavy lifting. You can also test your REST API with simple to use tools like SoapUI.

Explaining a Simple SOAP Example

SOAP, by its very nature, requires a little more setup, but I think you’ll be amazed at how simple it is to use.
Begin this example by creating a Windows Forms application using Visual Studio. The sample code uses C#, but the same technique works fine with other .NET languages (you’ll need to modify the code to fit). Add labels, textboxes, and buttons as shown here (the Latitude and Longitude fields are read-only).
GeoCoder SOAP example
Here’s where the automation comes into play. Right click References in Solution Explorer and choose Add Service Reference from the context menu. You’ll see the Add Service Reference dialog box. Type the following address into the address field: http://rpc.geocoder.us/dist/eg/clients/GeoCoder.wsdl and click Go. Type GeocoderService in the namespace field. Your dialog box should look like the one shown here.
GeoCoder Web service
Click OK. Visual Studio adds the code needed to work with Geocoder in the background.
At this point, you’re ready to use the Web service. All you need to do is to add some code to the Get Position button as shown here.
private void btnGetPosition_Click(object sender, EventArgs e)
{
   // Create the client.
   GeocoderService.GeoCode_PortTypeClient Client =
      new GeocoderService.GeoCode_PortTypeClient();
   // Make the call.
   GeocoderService.GeocoderResult[] Result =
      Client.geocode(txtAddress.Text);
   // Check for an error result.
   if (Result != null)
   {
      // Display the results on screen.
      txtLatitude.Text = Result[0].lat.ToString();
      txtLongitude.Text = Result[0].@long.ToString();
   }
   else
   {
      // Display an error result.
      txtLatitude.Text = "Error";
      txtLongitude.Text = "Error";
   }
}
The code begins by creating a client. This is a common step for any Web service you use with Visual Studio (or other environments that support SOAP natively). To see another version of the same step, check out the PHP example.
After you create the client, you use it to call one of the methods supported by the Web service. In this case, you call geocode() and pass the address you want to work with. The result of the call is stored in a GeocoderResult variable named Result. A single address could possibly end up providing multiple positions if you aren’t specific enough, so this information is passed back as an array.
Let’s assume that no errors occur (an error would result in a null return value). The example assumes that you provided great information, so it places the information found in the first Result entry into the Latitude and Longitude output. So, this example isn’t really that complicated compared with REST, but as you can see, even a simple example is more work.
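Since the article mentions a PHP version of the same step, here is a minimal sketch using PHP's built-in SoapClient (it assumes the PHP SOAP extension is enabled; the method and field names mirror the WSDL used above):

<?php
// SoapClient reads the WSDL and generates the proxy methods automatically.
$client = new SoapClient('http://rpc.geocoder.us/dist/eg/clients/GeoCoder.wsdl');

// Call the same geocode() method as in the C# example.
$result = $client->geocode('1600 Pennsylvania Ave, Washington DC');

if (!empty($result)) {
    // Unlike C#, "long" is not reserved in PHP, so no @ prefix is needed.
    echo 'Latitude: '  . $result[0]->lat  . "\n";
    echo 'Longitude: ' . $result[0]->long . "\n";
} else {
    echo 'Error';
}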

The Bottom Line: When To Use SOAP Or REST

Some people try to say that one process is better than the other, but this statement is incorrect. Each protocol has definite advantages and equally problematic disadvantages. You need to select between SOAP and REST based on the programming language you use, the environment in which you use it, and the requirements of the application. Sometimes SOAP is a better choice and other times REST is a better choice. In order to avoid problems later, you really do need to chart the advantages and disadvantages of a particular solution in your specific situation.
There’s one absolute you should get from this article. Don’t reinvent the wheel. It’s amazing to see companies spend big bucks to create Web services that already exist (and do a better job than the Web service the company creates). Look for free alternatives whenever possible. In many cases, the choice of Web service also determines your choice of protocol.
Actually, there are two. Whichever you pick between SOAP and REST for your web service, make sure you thoroughly test your APIs. Ready! API has a full suite of functional, performance, security and virtualization tools for your API testing needs. You can also learn how to test RESTful APIs in our API Testing Resource Center.

Tuesday, 17 January 2017

OSI Seven Layers Model Explained With Examples

Please Do Not Throw Sushi and Pizza Away. (PDNTSPA).



OSI Model Seven Layers

The OSI model has seven layers: Application, Presentation, Session, Transport, Network, Data Link, and Physical.

Application Layer

The application layer provides the platform to send and receive data over the network. All applications and utilities that communicate with the network fall in this layer. For example:
Browsers :- Mozilla Firefox, Internet Explorer, Google Chrome, etc.
Email clients :- Outlook Express, Mozilla Thunderbird, etc.
FTP clients :- FileZilla, sFTP, vsFTP
Application layer protocols that we should know for the exam are the following:
SNMP (Simple Network Management Protocol) — Used to monitor and manage connected networking devices.
TFTP (Trivial File Transfer Protocol) — Used to transfer files quickly, without reliability guarantees.
DNS (Domain Name System) — Used to translate a name to an IP address and vice versa.
DHCP (Dynamic Host Configuration Protocol) — Used to assign IP address and DNS information automatically to hosts.
Telnet — Used to connect to remote devices.
HTTP (Hypertext Transfer Protocol) — Used to browse web pages.
FTP (File Transfer Protocol) — Used to reliably send/retrieve files.
SMTP (Simple Mail Transfer Protocol) — Used to send email.
POP3 (Post Office Protocol v.3) — Used to retrieve email.
NTP (Network Time Protocol) — Used to synchronize clocks.

Presentation layer



The presentation layer prepares the data. It takes data from the application layer and marks it with formatting codes such as .doc, .jpg, .txt, .avi, etc. These file extensions make it easy to recognize that a particular file is formatted for a particular type of application. Along with formatting, the presentation layer also deals with compression and encapsulation. It compresses (on the sending computer) and decompresses (on the receiving computer) the data file. This layer can also encapsulate the data, but that is uncommon, as this can be done by lower layers more effectively.

The Session Layer

The session layer deals with connections. It establishes, manages, and terminates sessions between two communicating nodes, providing its services to the presentation layer. The session layer also synchronizes the dialogue between the presentation layers of the two hosts and manages their data exchange. For example, web servers may have many users communicating with the server at a given time; keeping track of which user communicates on which path is important, and the session layer handles this responsibility.

Transport Layer

As far as the CCNA exam is concerned, this is the most important layer to study. I suggest you pay extra attention to this layer, as it is heavily tested in the exam.
The transport layer provides the following services:
  • It sets up and maintains the connection between two devices.
  • It multiplexes connections, which allows multiple applications to simultaneously send and receive data.
  • Depending on requirements, the data transmission method can be connection-oriented or connectionless.
  • For unreliable data delivery, the connectionless method is used.
  • The connectionless method uses the UDP protocol.
  • For reliable data delivery, the connection-oriented method is used.
  • The connection-oriented method uses the TCP protocol.
  • When a reliable connection is implemented, sequence numbers and acknowledgments (ACKs) are used.
  • A reliable connection controls flow through the use of windowing or acknowledgements.
For exam purposes, remember the five main functions of the transport layer.
  1. Segmentation
  2. Connection management
  3. Reliable and unreliable data delivery
  4. Flow control
  5. Connection multiplexing
Let’s understand these functions in more depth

Segmentation

Segmentation is the process of breaking a large data file into smaller pieces that can be accommodated by the network. To understand this process, think about a 700 MB movie that you want to download from the internet over a 2 Mbps connection. How will you download a 700 MB movie over a 2 Mbps connection?
In this case the segmentation process is used. On the server, the transport layer breaks the 700 MB movie into smaller segments. Assume the movie is divided into 700 segments of 1 MB each, which your PC can easily handle at the current connection speed. Your PC now downloads 700 small pieces instead of one large file. So next time you see the download progress bar in a browser, think of it as a segment-received progress bar. Once your browser receives all segments from the server, it pops up a message indicating the download is complete. The transport layer on your PC merges all segments back into a single 700 MB movie file. The end user never knows how a 700 MB movie made its way through the 2 Mbps connection.

Connection management

The transport layer sets up, maintains, and tears down connections for the session layer. The actual mechanics of a connection are controlled by the transport layer, which uses two protocols for connection management: UDP and TCP.

UDP

UDP is a connectionless protocol. Connectionless transmission is said to be unreliable. Now, don't get worried about the term "unreliable"; this doesn't mean that the data isn't going to reach its destination, only that it isn't guaranteed to reach its destination. Think of sending a postcard: put it in the mailbox, and chances are good that it will get where it's supposed to go, but there is no guarantee. There is always a chance it gets lost on the way. On the other hand, it's cheap.

TCP

TCP is a connection-oriented protocol. Connection-oriented transmission is said to be reliable. Think of TCP as the registered post with acknowledgement due (AD) facility available at the Indian post office. For this level of service, you have to pay extra and put a bunch of extra labels on the parcel to track where it is going and where it has been. You get a receipt when it is delivered. With this method you have guaranteed delivery. All of this costs you more, but it is reliable!

Reliability

Reliability means guaranteed data delivery. To ensure delivery of every single segment, the connection-oriented method is used. In this approach, a three-way handshake is performed before any segments are sent.
Three-way handshake process
  1. PC1 sends a SYN signal to PC2, indicating that it wants to establish a reliable session.
  2. PC2 replies with a SYN/ACK signal, where ACK is the acknowledgment of PC1's SYN signal and SYN indicates that PC2 is ready to establish a reliable session.
  3. PC1 replies with an ACK signal, indicating that it has received the SYN signal; the session is now fully established.
Once the connection is established, data transmission begins. To provide maximum reliability, TCP includes the following functions (a small connection sketch in PHP follows this list):
  • Detect lost packets and resend them
  • Detect packets that arrived out of order and reorder them
  • Recognize duplicate packets and drop extra packets
  • Avoid congestion by implementing flow control
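Here is the small connection sketch mentioned above: opening a TCP connection from PHP triggers exactly this handshake, with the operating system performing the SYN, SYN/ACK, ACK exchange before any data flows (host and port are illustrative):

<?php
// stream_socket_client() performs the TCP three-way handshake under the hood.
$conn = stream_socket_client('tcp://www.example.com:80', $errno, $errstr, 10);
if ($conn) {
    // Handshake complete: TCP now provides ordered, reliable delivery.
    fwrite($conn, "GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n");
    echo fread($conn, 512);
    fclose($conn);
} else {
    echo "Connection failed: $errstr ($errno)";
}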

Flow control

The transport layer implements two flow control methods:
  • Ready/not ready signals
  • Windowing

Ready / not ready signals method

In this method the sender sends data according to its buffer size, and the receiver receives data into its buffer. When the receiver's buffer fills up, it sends a not-ready signal to the sender so the sender stops transmitting segments. The receiver sends a ready signal when it becomes ready to receive the next segments. This method has two problems:
  • First, the receiver may respond with a not-ready signal only when its buffer is already full. While this message is on its way to the sender, the sender is still sending segments to the receiver, which the receiver will have to drop because its buffer space is full.
  • Second, once the receiver is ready to receive more segments, it must first send a ready signal to the sender, which must arrive before the sender can send more segments.

Windowing

In windowing, a window size is agreed between sender and receiver. The sender waits for an acknowledgement signal after sending the number of segments equal to the window size. If any segment is lost on the way, the receiver responds with an acknowledgement for the lost segment, and the sender retransmits it. The window size is set automatically during the three-way handshake and can be adjusted at any time throughout the lifetime of the connection.

Connection Multiplexing/Application Mapping

The connection multiplexing feature allows multiple applications to connect at a time. For example, a server performs a number of functions like email, FTP, DNS, web service, file service, data service, etc. Suppose the server has a single IP address; how will it perform all these different functions for all the hosts that want to connect with it? To make this possible, the transport layer assigns a unique set of numbers to each connection. These numbers are called port or socket numbers. Port numbers allow multiple applications to send and receive data simultaneously.
Port numbers are divided into the following ranges by the IANA:
Port number      Description
0–1023           Well-Known — for common TCP/IP functions and applications
1024–49151       Registered — for applications built by companies
49152–65535      Dynamic/Private — for dynamic connections or unregistered applications


Common TCP and UDP Port Numbers
TCP                          UDP
FTP       20, 21             DNS       53
Telnet    23                 DHCP      67, 68
SMTP      25                 TFTP      69
DNS       53                 NTP       123
HTTP      80                 SNMP      161
POP3      110
NNTP      119
HTTPS     443

Network Layer

The network layer is responsible for providing the logical address, known as the IP address. Routers work at this layer. The main functions of this layer are the following:
  • Define IP addresses
  • Find routes based on IP addresses to reach the destination
  • Connect different data link types together, such as Token Ring, Serial, FDDI, Ethernet, etc.

IP address

An IP address is a 32-bit software address made up of two components:
Network component :- Defines the network segment of the device.
Host component :- Defines the specific device on a particular network segment.
A subnet mask is used to distinguish between the network component and the host component.
IP addresses are divided into five classes:
  • Class A addresses range from 1-126.
  • Class B addresses range from 128-191.
  • Class C addresses range from 192-223.
  • Class D addresses range from 224-239.
  • Class E addresses range from 240-254.
The following addresses have special purposes:
0 [zero] is reserved and represents all IP addresses;
127 is a reserved address, used for testing, like a loopback on an interface;
255 is a reserved address, used for broadcasting purposes.

IP packet

The network layer receives a segment from the transport layer and wraps it with an IP header; the result is known as a datagram.

Datagram

A datagram is just another name for a packet. The network layer uses datagrams to transfer information between nodes.
Two types of packets are used at the network layer: data packets and route update packets.
Data packets
Data packets are used to transport user data across the network. Protocols used by data packets are known as routed protocols, for example IP and IPv6.
Route update packets
These packets are used to update route information within an internetwork. Routers use these packets. Protocols that send route update packets are called routing protocols, for example RIP, RIPv2, EIGRP, and OSPF.

Data link layer

The main functions of the data link layer are:
  • Defining the Media Access Control (MAC) or hardware addresses
  • Defining the physical or hardware topology for connections
  • Defining how the network layer protocol is encapsulated in the data link layer frame
  • Providing both connectionless and connection-oriented services
  • Defining the communication process that occurs within a medium

MAC Address

A MAC address is a 48-bit layer-two address, also known as the hardware address. This address is burnt into the device by the manufacturer.
The first six hexadecimal digits of a MAC address identify its manufacturer.
MAC addresses only need to be unique within a broadcast domain.
You can have the same MAC address in different broadcast domains.

Frame

The data link layer receives a packet from the network layer and wraps it with a layer-two header; the result is known as a frame. There are two specifications of Ethernet frame:
  1. Ethernet II
  2. IEEE 802.2/802.3
Key points to remember:-
  • Ethernet II does not have any sublayers, while IEEE 802.2/3 has two: LLC and MAC.
  • Ethernet II has a type field instead of a length field (used in 802.3).
  • 802.2 uses a SAP or SNAP field to differentiate between encapsulated layer-3 payloads.
  • With a SNAP frame, the SAP fields are set to 0xAA and the type field is used to indicate the layer-3 protocol.
  • The SAP field in an 802.2 frame is eight bits long, and only the first six bits are used for identifying upper-layer protocols, which allows up to 64 protocols.
  • An 802.2 SNAP frame supports up to 65,536 protocols.

Physical Layer

The physical layer deals with the communication media. This layer receives frames from the data link layer and converts them into bits, then loads these bits onto the actual communication media. Depending on the media type, these bit values are converted into signals: some media use audio tones, while others utilize state transitions (changes in voltage from high to low and low to high).

Protocol data unit

A piece of data passed between layers is known as a PDU (protocol data unit). Layers have different terms to describe it: segment at the transport layer, packet at the network layer, frame at the data link layer, and signal at the physical layer.
A PDU includes the data file plus a consistent body of information attached to the data at each successive layer. This information is called the header and footer. It includes instructions on how to restore the file to its original state when it reaches the target system.
As a PDU passes through the layers, a header (and, only at the data link layer, a footer) is added with information for the peer layer on the destination system, which reconstructs the data on its way back up through the layers of the destination network.

Data Exchange Process

In the data exchange process, the participating computers work in reverse mode: layers on the receiving computer perform the same tasks in reverse.
The receiving device takes delivery of, handles, and translates the data from the sending device at the corresponding layer. For example, if the presentation layer on the sending computer compresses the data, the presentation layer on the receiving computer decompresses it.

On sending computer

  • The sending application accesses the application layer.
  • The application layer provides data to the presentation layer.
  • The presentation layer formats the data as per network requirements and forwards it to the session layer.
  • The session layer initiates the connection and forwards the data to the transport layer.
  • The transport layer breaks the large data file down into smaller segments and adds a header with control information: bits that describe how to determine whether the data is complete, uncorrupted, in the correct sequence, and so forth.
  • Segments are forwarded to the network layer. The network layer adds its header, with the logical address, converting each segment into a packet. The network layer forwards packets to the data link layer.
  • The data link layer attaches its header and footer to the packet, converting it into a frame.
  • Frames are forwarded to the physical layer, which converts them into signals. These signals are loaded onto the media.

On receiving computer

  • The physical layer receives signals from the media and converts them into frames, which are forwarded to the data link layer.
  • The data link layer checks each frame. All tampered frames are dropped here. If a frame is correct, the data link layer strips its header and footer from the frame and hands the packet over to the network layer.
  • The network layer checks the packet against its own rules. If everything is fine with the packet, it strips its header from the packet and hands the segment over to the transport layer.
  • The transport layer does the same job again: it verifies the segments against its own protocol rules, and only verified segments are processed. The transport layer removes its header from the verified segments and reassembles the segments into data, which is handed over to the session layer.
  • The session layer keeps track of the open connection and forwards the received data to the presentation layer.
  • The presentation layer formats the data in such a way that the application layer can use it.
  • The application layer on the receiving computer finds the appropriate application on the computer and opens the data within that application.

In a nutshell

At the sending device, each layer breaks the data down into smaller packets and adds its own header.
At the receiving device, each layer strips off the header and builds the data packets into larger packets.
Each protocol layer is blind to the headers of any other protocol layer and cannot process them.

TCP/IP Reference Model

The TCP/IP protocol model is another popular layered model that describes network standards. For the CCNA exam you should be aware of this model as well. It uses some of the same layer names as the OSI reference model, but don't be confused by the shared names: the layers have different functionality in each model.
Let's see how the TCP/IP model differs from the OSI reference model.

Application layer:

The TCP/IP model combines the functionality of the OSI model's application, presentation, and session layers into a single application layer. In the TCP/IP model, the application layer does all the tasks that are performed by the upper layers in the OSI model: it deals with high-level protocols, including data presentation, compression, and dialog control.

Transport layer:

In the TCP/IP model, the transport layer provides quality of service. The TCP protocol is used for reliable data delivery; flow control and error correction methods are used to guarantee data delivery.

Internet layer:

In the TCP/IP model, the Internet layer provides all the functionality that the network layer provides in the OSI model. The Internet layer is responsible for finding the correct path for a datagram [packet].

Network access layer:

The name of this layer may confuse you, as the OSI model has a similarly named (network) layer. In the TCP/IP model, the network access layer deals with LAN and WAN protocols and covers all the functionality provided by the physical and data link layers in the OSI model.

Cisco's three-layer hierarchical model

Cisco's three-layer hierarchical model is a set of networking specifications provided by Cisco. This model describes which Cisco device works at which layer.

Core Layer

The high-speed layer-2 switching infrastructure works at this layer.

Distribution Layer

The distribution layer sits between the access and core layers. Routers and layer-3 switches work at this layer.

Access Layer

This layer provides users' initial access to the network via switches or hubs.
That's all for this article. In the next article I will explain another CCNA topic.