
In today’s blog we are going to take a look at QoS on the Catalyst 3560 platform – the only switch we need to be concerned with in the CCIE R&S lab. QoS on the 3560 is quite an elaborate topic. This article is not designed to teach you every possible command and option there is to know, but rather to look at the aspects that are most important to understand from the perspective of a CCIE R&S candidate. The ultimate resource for Catalyst 3560 QoS information is of course the 3560 software configuration guide. We will first look at the general QoS model, and then take some time to break apart each section. I won’t lie to you guys: 3560 QoS is an intense topic for most people. This blog will be lengthy, but I will try to keep it to the most important things. Ten or eleven pages is certainly more attractive than picking it all out of the 1300-page configuration guide yourself : ) Hopefully after reading this blog you will have a much clearer understanding of your options when dealing with 3560 QoS.

QoS Actions For Ingress Traffic

When traffic arrives on a switchport, there is a set of different QoS functions we have the ability to apply.
• Classification
• Policing
• Marking
• Queueing
• Scheduling

Most of these functions are specific to ingress traffic. A few of them, namely queueing and scheduling, we can also apply to egress traffic. One important thing to remember with the 3560 is that when we talk about queueing or scheduling, we could be talking about it from the perspective of inbound traffic (ingress) or outbound traffic (egress). The queueing and scheduling mechanisms are very similar, but ingress queueing has two queues, whereas egress queueing has four. In both cases, one queue can be configured as a strict priority queue. In this article we will take a look at classification, policing and marking. Additionally, we will look at egress queueing and scheduling.

Classification

The entire point of QoS is to give preferential treatment to packets that are deemed “more important” than other packets during times of congestion. If we want to do that, we need a way to figure out which packets are more important. That is the basis of classification. There are multiple ways to do classification on the 3560, depending on what type of environment you are working in and what you are trying to accomplish. We may classify traffic at the interface level through a variety of methods we will discuss below, or at the VLAN level. If you choose to do classification at the VLAN level you will need the mls qos vlan-based command on the interfaces that are part of the VLAN you want to classify on.

For non-IP traffic we can classify based on trusting the incoming CoS values or based on MAC ACLs. Recall that CoS is a Layer 2 marking, so it makes sense that we can classify non-IP traffic with CoS. CoS bits get set in the L2 header and really have nothing to do with IP. With IP traffic, however, we can classify based on trusting incoming CoS, DSCP or IP Precedence values, or with L3 ACLs. This also makes sense if you recall that DSCP and IP Precedence are marked in the ToS byte of the IP header. If you attempt to trust DSCP or IP Precedence for non-IP traffic, one of two things will happen: if the frame has a CoS value, it will be retained; if not, the default port CoS value will be used.

If you choose to classify with ACLs instead of trusting QoS markings, you will ultimately configure an ACL, call the ACL in a class-map, and call the class-map from a policy-map which you apply to the interface. You can also configure VLAN-based classification by applying a policy-map that does classification and marking to an SVI. Generally speaking, if you choose not to trust anything and you don’t have any ACLs configured, traffic will be marked down to best effort.
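To make the ACL-based option concrete, here is a quick sketch of the ACL, class-map and policy-map chain described above. All names and the AF31 marking here are purely illustrative:

```
! Hypothetical example: classify Telnet traffic with an ACL and mark it AF31
ip access-list extended TELNET-TRAFFIC
permit tcp any any eq 23
!
class-map match-all TELNET
match access-group name TELNET-TRAFFIC
!
policy-map CLASSIFY-AND-MARK
class TELNET
set dscp af31
!
interface FastEthernet0/10
service-policy input CLASSIFY-AND-MARK
```

Any Telnet traffic arriving on Fa0/10 would match the class and be marked DSCP AF31; everything else falls through unmarked.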
If a packet is unmarked when it arrives, it will be assigned the default CoS value of the interface, which is 0. This, of course, is something we can change with the mls qos cos interface command.
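For example, to change the default CoS on a port from 0 to 3, and optionally to force that value onto ALL incoming frames regardless of any existing marking, we could configure something like this:

```
3560(config)#int fa0/2
3560(config-if)#mls qos cos 3
3560(config-if)#mls qos cos override
```

Without the override keyword, the port CoS is only applied to frames that arrive with no marking of their own.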
Understanding trust states, as well as the internal mapping-table logic of your 3560 switch, is probably one of the most important things to know about. It is also one of the most important things to know how to manipulate.
First of all, in order to turn on QoS processing on the switch, we need to enable it with the mls qos command. Secondly, we need to understand our options. As stated previously, you can configure your switch ports to trust certain QoS markings, or not trust QoS markings. By default, nothing is trusted. To trust the markings you will use a variant of the mls qos trust command as shown below. The device option gives us the ability to essentially say “ONLY trust the incoming QoS markings IF the device connected to this port is a Cisco phone.” This allows us to trust an IP phone’s QoS markings for voice packets while also protecting against a user unplugging his PC from the phone and running his cable directly into the switch and getting priority treatment of all packets.
3560(config)#mls qos
3560(config)#int fa0/1
3560(config-if)#mls qos trust ?
cos            cos keyword
device         trusted device class
dscp           dscp keyword
ip-precedence  ip-precedence keyword
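For example, to trust incoming CoS markings only when a Cisco phone is detected on the port (via CDP), we could combine two of these options:

```
3560(config)#int fa0/5
3560(config-if)#mls qos trust cos
3560(config-if)#mls qos trust device cisco-phone
```

With this configuration, if no Cisco phone is detected on the port, the trust state is not granted and the port behaves as untrusted.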
So far we have seen that when a frame arrives at the switch port we can either trust the markings or not trust the markings. If we don’t trust the markings, or if there is no marking, we will get a CoS value of 0 by default. However, if we DO decide to trust the markings, we move on to the next step: mappings. You see, the 3560 has a variety of internal mapping tables that decide, based on the incoming markings, what the final markings are going to be. For example, we have CoS-To-DSCP and DSCP-To-CoS, among others. The idea is that when a frame comes in, the switch can look at the incoming QoS markings, and from those incoming markings derive a new QoS marking which will ultimately determine how that traffic gets queued. Let’s have a look at some examples.
3560#sh mls qos maps ?
cos-dscp       cos-dscp map keyword
cos-input-q    cos-input queue map keyword
cos-output-q   cos-output queue map keyword
dscp-cos       dscp-cos map keyword
dscp-input-q   dscp-input queue  map keyword
dscp-mutation  dscp-mutation map keyword
dscp-output-q  dscp-output queue map keyword
ip-prec-dscp   ip-prec-dscp map keyword
policed-dscp   policed-dscp map keyword
|              Output modifiers

3560#sh mls qos maps cos-dscp
Cos-dscp map:
cos:   0  1  2  3  4  5  6  7
--------------------------------
dscp:   0  8 16 24 32 40 48 56

3560#sh mls qos maps dscp-cos
Dscp-cos map:
d1 :  d2 0  1  2  3  4  5  6  7  8  9
---------------------------------------
0 :    00 00 00 00 00 00 00 00 01 01
1 :    01 01 01 01 01 01 02 02 02 02
2 :    02 02 02 02 03 03 03 03 03 03
3 :    03 03 04 04 04 04 04 04 04 04
4 :    05 05 05 05 05 05 05 05 06 06
5 :    06 06 06 06 06 06 07 07 07 07
6 :    07 07 07 07

3560#sh mls qos maps cos-output-q
Cos-outputq-threshold map:
cos:  0   1   2   3   4   5   6   7
------------------------------------
queue-threshold: 2-1 2-1 3-1 3-1 4-1 1-1 4-1 4-1
In the examples above we are looking at three different tables: Cos-To-DSCP and DSCP-To-Cos as well as the CoS to output queue mapping. The idea is this — A frame comes in with some QoS marking (Let’s call it CoS for now) that we happen to trust on the port. The switch then looks at the CoS-To-DSCP mapping and derives a DSCP value to give to the packet. The switch then consults the DSCP-To-CoS map to derive a new CoS value. Finally, based on that CoS value, it chooses an output queue. For example, we can see that CoS 2 gets marked with DSCP 16 and that DSCP 16 gets marked with CoS 2. Let’s say we are trusting CoS and a frame comes in with a CoS of 3. We see that the CoS-To-DSCP map will map CoS 3 to DSCP 24. The DSCP-To-CoS map is then consulted, and we see the CoS value will end up remaining at 3. Finally, we see that CoS 3 gets mapped to output queue #3 with a drop threshold of 1. Keep in mind that this is one particular example of one particular function. If we decided to trust DSCP or IPP instead of CoS, we would consult different tables but the general idea is the same. If we are trusting DSCP instead of CoS on the interface, we can let that DSCP value remain intact and pass through, or we can change it with a DSCP mutation map.
Understanding this process is important for a CCIE candidate. What is equally important is knowing how to modify these default mappings. The key command to remember is mls qos map; from there you can do almost anything. Let’s go back to our example. Let’s say we are trusting CoS and we want to make sure that incoming frames marked as CoS 3 get mapped to DSCP 28 instead of DSCP 24.
3560(config)#mls qos map cos-dscp ?
  <0-63>  DSCP values separated by spaces (up to 8 values total)

3560(config)#mls qos map cos-dscp 0 8 16 28 32 40 48 56
3560(config)#do sh mls qos map cos-dscp
Cos-dscp map:
cos:   0  1  2  3  4  5  6  7
--------------------------------
dscp:   0  8 16 28 32 40 48 56
Now suppose that we need to make sure DSCP 28 gets marked back to CoS 5 so that it is sent to priority queue #1 outbound. To modify the DSCP-to-CoS table you would use the mls qos map dscp-cos command, and if you wish to modify the default CoS/DSCP-to-output-queue mappings you can use the mls qos srr-queue output command.
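Building on the DSCP 28 scenario, the two changes might look something like this:

```
! Map DSCP 28 back to CoS 5
3560(config)#mls qos map dscp-cos 28 to 5
! Explicitly map CoS 5 to output queue 1, threshold 3
3560(config)#mls qos srr-queue output cos-map queue 1 threshold 3 5
```

Note that CoS 5 already maps to output queue 1 by default, so the second command is shown mainly to illustrate the cos-map syntax.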

Policing And Marking

After arriving traffic has been classified, things can move on to the next step if necessary, which is policing and marking. Policing, much like on routers, is for rate-limiting the bandwidth of a particular type of traffic. On the 3560 switch, we can police on an individual interface, or even on an SVI by using hierarchical policies. When we police traffic, we set bandwidth limits on that traffic, and decide what to do about it if things are misbehaving. Namely, we can pass the traffic, drop the traffic, or mark down the QoS markings of the packets. This is all decided on a per-packet basis. After being policed, and potentially marked down THEN we can move into queueing.
One thing to understand about policing on the 3560 switch is that it works a bit differently than other types of policing or shaping you may be familiar with. In particular, people seem to get hung up on why everything they learned about frame-relay traffic-shaping, or shaping in general, no longer applies here. Well, for one thing we are talking about policing traffic, not shaping it. Shaping implies some sort of queueing in order to smooth the traffic to a given rate over time. Policing, on the other hand, does not queue traffic. The old well-known FRTS formula Tc = Bc/CIR is not really appropriate for policing on a 3560 switch. The policing IS done using a token bucket type algorithm, but it is much different from the token bucket algorithm used for FRTS. With FRTS things are based on how much data you can send per time slice (Tc). With 3560 policing there is no concept of hard-set time slices; the amount of data you can send at any given moment is determined by the configured policed rate and burst size. Applying traffic-shaping formulas to policing is comparing apples and oranges. If you don’t allow yourself to confuse the two different but similar technologies, you should be OK.

Policing Physical Ports

Once you decide to police a physical port as opposed to an SVI, you still have options : ) You can police an individual port based on a policy linked to a single class of traffic, or based on a policy that is linked to multiple classes (an aggregate policer). The individual port policer is pretty simple. The aggregate policer allows you to police a SET of classes all together: for example, you can configure multiple class-maps and police the sum of all their traffic to a certain rate. Let’s have a look at the syntax for the policer on a 3560 switch. This is something you would apply inside a policy-map.
police rate-bps burst-byte [exceed-action {drop | policed-dscp-transmit}]
The rate, specified in bits per second, is the average rate we wish to police to. This is not necessarily a hard limit, because we also have the ability to configure a burst size. The burst size allows the policed traffic to temporarily burst above and beyond the average rate. Keep in mind that the average rate is in terms of bits per second while the burst is in bytes. Notice I didn’t say bytes per second. The burst value is essentially defining how DEEP the token bucket is. It has nothing to do with how many bits or bytes PER anything; it is simply the size of the bucket in bytes. This parameter works in conjunction with how fast the bucket drains (the policed rate) to determine whether a packet conforms. If the packet conforms, it is transmitted. If it does not conform, we can either drop it or mark down its QoS based on the policed-dscp map. Many people learning this technology tend to get hung up on what values to use for burst. Part of this stems from the idea that when we configure policing on a router using CAR, we are often told to use the formula Bc = [(CIR / 8) * 1.5] for the Bc burst value (as recommended in the command reference). There is no such recommendation for burst values on the 3560. In short, do what you are told in the lab. If you are not given a burst value, I would not recommend you stress too much about what to put here.
Let’s look at an example of a typical port-based policer on the 3560. Let’s say we want to police FTP traffic coming in on port Fa0/19 to an average rate of 20Mbps with a burst of 10KB. We would configure something like this:
ip access-list extended FTP-TRAFFIC
permit tcp any any range ftp-data ftp
!
class-map match-all FTP-TRAFFIC
match access-group name FTP-TRAFFIC
!
policy-map POLICE-FTP
class FTP-TRAFFIC
police 20000000 10000 exceed-action drop
!
interface FastEthernet0/19
service-policy input POLICE-FTP
Now, let’s take a look at an aggregate policer. The idea here is this — We have a policy-map that will match multiple classes of traffic. If the rate of all the classes COMBINED goes over a certain level, we do something about that. In this case, let’s say we are matching HTTP, Telnet, and SSH traffic and we don’t want all that traffic combined to exceed 10 Mb/s with a burst of 50KB on interface fa0/1
mls qos aggregate-policer WEB-TELNET-SSH 10000000 50000 exceed-action drop
!
ip access-list extended HTTP-TRAFFIC
permit tcp any any eq 80
!
ip access-list extended TELNET-TRAFFIC
permit tcp any any eq 23
!
ip access-list extended SSH-TRAFFIC
permit tcp any any eq 22
!
!
class-map HTTP
match access-group name HTTP-TRAFFIC
!
class-map TELNET
match access-group name TELNET-TRAFFIC
!
class-map SSH
match access-group name SSH-TRAFFIC
!
!
policy-map AGGREGATE-POLICER
class HTTP
police aggregate WEB-TELNET-SSH
!
class TELNET
police aggregate WEB-TELNET-SSH
!
class SSH
police aggregate WEB-TELNET-SSH
!
!
interface FastEthernet0/1
service-policy input AGGREGATE-POLICER

Policing On SVIs

Policing at the SVI level can be a little more confusing at first, because it requires hierarchical policies. You cannot apply a policy-map that does policing directly to an SVI. This is similar to how you cannot apply queueing to an Ethernet sub-interface on a router. You must configure a more general VLAN-based “parent” policy and then call your more specific interface-based “child” policy from inside the parent VLAN policy. For example, let’s say we want to police HTTP traffic inbound to VLAN 80 to 1 Mb/s with a 20KB burst size, and the interfaces in VLAN 80 we want to apply this to are Fa0/1 – Fa0/5. It would look something like this:
! Apply VLAN-Based QoS to participating ports
!
interface range FastEthernet0/1 - 5
mls qos vlan-based
!
ip access-list extended HTTP
permit tcp any any eq 80
permit tcp any eq 80 any
!
class-map HTTP
match access-group name HTTP
!
class-map POLICED-PORTS
match input-interface FastEthernet0/1 - FastEthernet0/5
!
! Child policy to police the interfaces of VLAN 80
!
policy-map INTERFACE-POLICY
class POLICED-PORTS
police 1000000 20000
!
! Parent policy to apply to VLAN 80 in general
!
policy-map VLAN-POLICY
class HTTP
service-policy INTERFACE-POLICY
!
interface vlan 80
service-policy input VLAN-POLICY

Marking

The good news is that at this point, marking has really already been discussed. Marking can be done in a few ways. We have already seen how traffic can be marked through the classification process. If we trust markings on the port, incoming marked traffic can be re-marked through the internal switch mappings, passed through unchanged (in the case of trusted DSCP), or re-marked through a policy. If we don’t trust markings, we can still re-mark the traffic with a policy, or let it be treated as best effort. If the traffic is not marked in the first place, we can choose to mark it ourselves, or assign it the default interface CoS.

Queueing And Scheduling

As I mentioned earlier, the 3560 has two input queues, with queue 2 being the default priority queue. The 3560 also has four output queues per interface, with queue 1 being the priority queue. The priority queue is not something that happens automagically; it must be enabled. To enable the input priority queue you will use the global command mls qos srr-queue input priority-queue. To enable the output priority queue you will use the interface-level command priority-queue out.
When we are talking about queueing on the 3560 there is a concept known as WTD, or weighted tail drop, that applies to both input and output queues. To put it simply, each queue has three drop thresholds that correspond to different CoS values. Frames that are “less important” than others can be configured to be dropped at lower levels of congestion than “more important” frames. For example, maybe CoS values 5, 6 and 7 are considered very important in your network, but CoS 1-4 are less important. Your WTD configuration would map CoS 1-4 to thresholds that drop sooner than the threshold used for CoS 5-7. Maybe CoS 1-4 gets dropped once the queue is 60% full, whereas CoS 5-7 only gets dropped when it absolutely HAS to, at 100%. In the grand scheme of things for routing and switching I would say understand the basic concept and know where to find the information if you have to.
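As a rough sketch of that example, and assuming purely for illustration that all of these CoS values land in output queue 2, the WTD configuration might look like this:

```
! Map the "less important" CoS values 1-4 to queue 2, threshold 1
3560(config)#mls qos srr-queue output cos-map queue 2 threshold 1 1 2 3 4
! Map the "more important" CoS values 5-7 to queue 2, threshold 3
3560(config)#mls qos srr-queue output cos-map queue 2 threshold 3 5 6 7
! Set threshold 1 to 60% of the queue depth
! (syntax: threshold <queue-id> <thresh1> <thresh2> <reserved> <maximum>)
3560(config)#mls qos srr-queue output threshold 2 60 100 100 100
```

Threshold 3 is not configurable; it is always the full 100% queue depth, which is exactly what we want for the important traffic.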
Now, when we talk about how the queues actually get serviced and how much service each one gets (who gets better treatment by getting more attention), we are getting into SRR, or shaped round robin. Again, this applies to both input and output queues, but we will focus on egress queueing here. As usual, there are multiple options: namely, shaped mode and shared mode. In short, with shaped mode each queue is guaranteed a certain amount of bandwidth, but is also policed to that level. If the other queues are not filled, the extra bandwidth is not utilized. With shared mode, as the name implies, the bandwidth is shared among the queues according to configured weights, but is not policed. The syntax is interpreted quite differently between the two modes.
With shaped mode we have the following interface command:
srr-queue bandwidth shape weight1 weight2 weight3 weight4
In this case the weight values refer to a specific amount of bandwidth guaranteed for each queue: weight1 is for queue 1, weight2 for queue 2, and so on. The numbers that you enter are the denominator portion of a fraction. For example:
srr-queue bandwidth shape 4 4 4 4
srr-queue bandwidth shape 8 0 0 0
First of all, these are two different examples. In the first line, we are saying “each queue will get 1/4 of the bandwidth of this interface guaranteed.” In the second example, we are saying “queue 1 will get 1/8 of the bandwidth guaranteed, and all other queues operate in shared mode.” Using a 0 for a queue tells it to operate in shared mode. So in the case of the second example, queues 2-4 would share the remaining 87.5% equally, assuming their shared weights are equal.
With shared mode we have the following command:
srr-queue bandwidth share weight1 weight2 weight3 weight4
In the case of shared mode, the numbers have a different meaning. Now we are looking at a ratio describing how the queues relate to each other; the absolute values themselves have no real meaning. The only thing that matters in shared mode is the ratio between the queues. For example, the following three lines accomplish exactly the same thing: each would allocate 25% of the interface bandwidth to each queue, because the ratio between the queues is 1:1.
srr-queue bandwidth share 1 1 1 1
srr-queue bandwidth share 25 25 25 25
srr-queue bandwidth share 100 100 100 100
Let’s try something a bit different:
srr-queue bandwidth share 10 20 30 40
With the last example, queue 4 gets four times as much bandwidth as queue 1 because it has a 4:1 ratio with queue 1. Queue 4 gets twice as much bandwidth as queue 2, and 1 1/3 times as much as queue 3.
Hopefully, this article has been of help to everybody out there learning QoS on the Catalyst 3560. I hope that this has given you some valuable insight into the many different options and capabilities of this switch. As you can see, the 3560 is a very powerful device! As always, I would highly recommend and encourage you guys to do more research, labbing and reading on your own to master this topic. The place to go is the 3560 software configuration guide on the DocCD.