Workload Model and Template Introduction

What is a Workload model or template? It is a model of storage I/O characteristics that represents application behavior. This model can be used to emulate the Workload and supports numerous activities across the storage lifecycle (development, pre-production, and production), such as testing, technology and design evaluation, capacity planning, and performance analysis and optimization.
WorkloadCentral workloads are designed to use the unique capabilities of the Virtual Instruments (formerly Load DynamiX) Load Generation Appliances and cannot be used with other products.
When you select Edit Your Workload Models from the WorkloadCentral homepage, you are taken into the Workload Library, which by default shows all the workloads you have created.

These workloads may have come from analysis of your production data or from cloning an existing workload from the Workload Templates, Workload Examples, or other community-contributed Workloads.

Cloning Existing Workload Models

Cloning allows you to copy an existing workload and modify it, whether or not you were the author. Once you clone a workload, it is yours to rename, modify, and share.

To create a new Workload based on an existing template, example, or community workload, simply select Clone.
Selecting the Clone button will pop up a small dialog (below). Enter a name for your Workload and select Clone to complete the creation of your Workload.
 
If successful, you will see the following Workload screen; a new Workload will have been created and added to the Workload Library.

You can see a summary of the commands used in the workload by expanding the Workload Contents section.

The following buttons and actions are available:

  • Edit allows you to edit the Workload settings.
  • Clone creates a copy of the current Workload.
  • Download saves the current Workload to a zip file.
  • Delete removes a Workload; only Workloads you have contributed to the community can be deleted, and you will be prompted to confirm this action.
  • Help provides context-sensitive information on each Workload model section and the meaning of the currently selected value. Simply mousing over the symbol opens the tool tip.

Common Workload Concepts and Settings

Each protocol has specific settings available to describe the I/O access, read/write behavior, metadata, file or object system, pre-test, data, error handling, and other relevant parameters that control the Workload's specific behavior. These settings can be edited and saved with the Workload and can also be modified when the Workload is started.
We'll start by describing the sections common to all Workloads: the Name, Description, Tags, Privacy, Data Parameters, Load Profile, and Tracing parameters. Protocol-specific parameters are discussed in subsequent sections.

Workload description and tagging


The Workload name, Workload description, and tags can be edited by simply clicking in each text box. All normal navigation and edit keys are supported.
A tag is created automatically from one or more words typed in the tags box when you press Return. To remove a tag, simply click the "x" on that tag. It is recommended that tagging best practices be established for each environment so that all team members tag using the same standards.

Access Patterns

Access Patterns are key to understanding how frequently, and by what means, the storage is accessed. This is an area where it is most important to consider several use cases, such as average or peak usage, business cycles, or special events such as marketing campaigns. The proper characterization of the access pattern differs for file, object, and block, and is described in the section dedicated to each Workload type.

Writes and Reads

The writes and reads section allows you to configure the block sizes, the percentage of random versus sequential behavior, and the number of asynchronous I/Os. While there are separate sections for Writes and Reads, the configuration options are the same. To choose whether a single block size or a distribution will be used, select the down arrow next to the current value.

Use the Write/Read Pattern to select the desired percentage of commands that access the location following the previous command versus a new random location. Dragging the slider to the left decreases the random percentage and increases the sequential percentage. The current settings indicate 1 in 4 writes will be random and 3 in 4 writes sequential. This setting is only available for Block and File protocols, because Object protocols don't allow access within a portion of an object. For File Workloads configured with small file sizes, it may not be possible to achieve significant sequential access; this depends on the ratio of file sizes to block sizes.
If you want to use the same parameters for Reads as for Writes, simply check the checkbox in the Reads section.
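The behavior of the random/sequential slider can be sketched in Python. This is a minimal illustration of the concept, not the product's implementation; all names here are hypothetical.

```python
import random

def generate_offsets(n_ios, block_size, lun_blocks, pct_random, seed=0):
    """Yield byte offsets where pct_random of I/Os jump to a new random
    block and the rest continue at the next sequential block."""
    rng = random.Random(seed)
    block = 0
    for _ in range(n_ios):
        if rng.random() < pct_random:
            block = rng.randrange(lun_blocks)   # new random location
        else:
            block = (block + 1) % lun_blocks    # sequential continuation
        yield block * block_size

# 25% random / 75% sequential, matching the 1-in-4 example above
offsets = list(generate_offsets(1000, 4096, 1_000_000, 0.25))
```

With `pct_random` at 0.25, roughly one in four I/Os lands at a fresh random block while the rest extend a sequential run, which is the pattern the slider describes.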

A constant block size of 512 bytes is the default value; you can select from a range of block sizes up to 1024 KB.

In addition to a constant block size, you can use a bin distribution of block sizes, either the default bins or your own custom bins; up to 8 bins are supported.
You can remove the default bins and replace them with your own custom bins using the button. Specify each size range by filling in the start (from) and end (to) sizes in bytes, kilobytes (KB), megabytes (MB), or gigabytes (GB).

You can change the ratio of size distributions by moving the sliders up to increase a value or down to decrease it. You can set the slider maximum by selecting the maximum percentage value the sliders can have in the top right-hand corner of the bin distribution section. The average block size is displayed, since often only the average block size in an environment is known or can be calculated.
It is important that actual, typical block sizes are used to achieve an odd average block size, rather than configuring the odd size itself. It is very unusual, and not desirable, for systems to be configured with odd block sizes; arrays are typically optimized to handle requests that are aligned to the array's block size.
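The displayed average is simply the percentage-weighted mean of the bin sizes. A short sketch (the bin values below are a hypothetical distribution, not defaults from the product):

```python
def average_block_size(bins):
    """bins: list of (size_in_bytes, percent) pairs.
    Returns the weighted average block size implied by the distribution."""
    total_pct = sum(pct for _, pct in bins)
    return sum(size * pct for size, pct in bins) / total_pct

# hypothetical mix: 75% 4 KB I/Os, 25% 64 KB I/Os
bins = [(4096, 75), (65536, 25)]
avg = average_block_size(bins)  # 19456 bytes (19 KB)
```

This also illustrates the point above: a 19 KB average arises naturally from standard 4 KB and 64 KB requests, so there is no need to configure an odd 19 KB block size directly.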

Data Parameters


Data parameters specify the nature of the test data being created and whether it is compressible or reducible. To change the method, select the down arrow  next to the current value.
The following types of data are available in the data parameter section:

  • Constant generates the same data pattern repeatedly.
  • Sequential generates a pattern of sequentially incrementing data.
  • Random generates a randomly created data pattern.
  • Seeded Random generates a random pattern that can be recreated; you specify the seed to be used.

  • Data Reduction allows you to specify compression and duplicate ratios and the number of unique duplicates. All ratios are rounded to a single decimal place. The data reduction option only applies to the FC and iSCSI Workloads.
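The value of the Seeded Random option is reproducibility: the same seed always regenerates the same data. A minimal Python sketch of the idea (the function name is illustrative, not the product's API):

```python
import random

def seeded_random_data(seed, length):
    """Generate a reproducible pseudo-random byte pattern from a seed."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(length))

# the same seed always reproduces the same data pattern,
# so a test run can be repeated with identical payloads
a = seeded_random_data(42, 64)
b = seeded_random_data(42, 64)
assert a == b
```

A plain Random pattern would differ on every run; seeding is what makes results comparable across test iterations.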

Pre-Test

A pre-test sets up the testing environment and normally includes preparing the file or object system and LUN so the Workload can be run against it. The Pre-Test section specifies when the pre-test is run, if at all. For Block Workloads, the portion of the LUN to be written is specified in the Block Workload section.

Load Properties

The Load Properties section, located at the bottom of a Workload, defines the type of load and the Rampup and Rampdown periods.
You can specify the load in terms of maximum possible workers, actions per second (IOPS), or throughput. It is also important that there are enough workers to achieve the desired load. As of 3.0, all Workloads use a probability methodology (rather than loop ratios) to ensure command, block, and other distributions are achieved, making it possible to have a very large number of workers and still achieve the desired ratios.
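The probability methodology can be sketched as weighted sampling: because every worker draws each command from the same distribution, the overall mix converges on the configured ratios regardless of worker count. The mix and function below are hypothetical illustrations, not the product's internals.

```python
import random
from collections import Counter

def pick_commands(mix, n, seed=0):
    """Draw n commands from a probability mix such as
    {'read': 0.7, 'write': 0.3}. Each draw is independent, so the
    aggregate ratio holds however many workers are sampling."""
    rng = random.Random(seed)
    names = list(mix)
    weights = [mix[name] for name in names]
    return [rng.choices(names, weights=weights)[0] for _ in range(n)]

# a hypothetical 70/30 read/write mix over 10,000 commands
counts = Counter(pick_commands({'read': 0.7, 'write': 0.3}, 10_000))
```

Contrast this with loop ratios, where each worker would issue a fixed repeating sequence; with independent sampling, no per-worker bookkeeping is needed to keep the distribution on target.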
To change the value, select the down arrow  next to the current value (i.e. max below) and choose a different load measurement method.
 
Actions per second allows you to specify the load in IOPS; actions per second include metadata operations where applicable to the protocol.
You can specify the number of concurrent workers to execute those actions across. To edit any value, simply select the relevant text box and type in the new value. 

If you select throughput, the load will be based on the traffic in KB/sec, MB/sec, or GB/sec from the number of concurrent workers.
Rampup allows you to specify the time taken to reach the specified load parameters, and Rampdown specifies the time taken to reduce the load to zero. You can change the Rampup and Rampdown values by editing the text boxes. To change the time unit from the default of seconds to minutes, hours, or days, simply click the down arrow and select a new value.
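Assuming the ramps are linear (a simplifying assumption for illustration, not a statement about the product), the load level over a run can be sketched as:

```python
def load_at(t, rampup, steady, rampdown, target):
    """Load level at time t (seconds) for a linear ramp-up to `target`,
    a steady phase, then a linear ramp-down back to zero."""
    if t < rampup:
        return target * t / rampup                       # ramping up
    if t < rampup + steady:
        return float(target)                             # steady state
    if t < rampup + steady + rampdown:
        elapsed = t - rampup - steady
        return target * (1 - elapsed / rampdown)         # ramping down
    return 0.0                                           # run complete

# e.g. a 30 s ramp-up to 5000 IOPS, 300 s steady, 30 s ramp-down:
# halfway through the ramp-up the load is 2500 IOPS
assert load_at(15, 30, 300, 30, 5000) == 2500.0
```

Sizing the ramp-up generously gives caches and queues time to warm up before the steady-state measurement window begins.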

For more information on Workload characteristics by protocol, see the following sections:

For more information on Workloads and WorkloadCentral see the following sections:

 
