Documents associated with scientific events

Performance evaluation targeting Quality of Experience, Keynote presentation

Gerardo Rubino 1
1 DIONYSOS - Dependability Interoperability and perfOrmance aNalYsiS Of networkS
Inria Rennes – Bretagne Atlantique, IRISA-D2 - RÉSEAUX, TÉLÉCOMMUNICATION ET SERVICES
Abstract: When we must evaluate the performance of a computing facility, a communication network, or a Web service, we typically build a model and then use it to analyze one or several metrics that we know are important for capturing the performance aspect of interest of the considered system (mean response time, mean backlog of jobs/packets/requests/..., loss probabilities, etc.). Typical tools for analyzing the model are queuing theory results, Markov chain algorithms, discrete event simulation, etc. If we specifically consider applications or services operating on the Internet and centered on video, audio, or voice content (IP telephony, video streaming, video-conferencing...), in most cases the ultimate target is the perception the user has of the delivered service, how satisfied she is with its quality, and this perception concentrates on the quality of the content (how good the reception of the transmitted voice was over the IP channel, or of the playback of the movie requested from the VoD server, etc.). We call this Perceptual Quality (PQ), and it is the main component of the user-centered Quality of Experience for these fundamental classes of applications. In theory, PQ is the mandatory criterion for taking the user into account when designing the system. Needless to say, these classes of applications and services are responsible for a large fraction of today's and tomorrow's Internet traffic in volume. The PQ is usually evaluated using subjective tests, that is, by means of panels of human observers. Many standards exist in this area, depending on the type of media, the type of usage, etc. A subjective testing session provides, at the end, a number measuring the PQ, that is, quantifying it.
When quantifying (when measuring), we typically refer to the Mean Opinion Score (MOS) of the video or voice sequence, and a standard range for MOS is the real interval [1,5], `1' being the worst, `5' the best. In this presentation, we will argue that with an appropriate approach to measuring this PQ, we can rely on our classic tools in performance evaluation (queuing models, low level stochastic processes, etc.) while focusing our effort on directly analyzing this central PQ aspect of our systems. Instead of saying "if the offered traffic and the system service rate satisfy relation R, then the throughput of the system is high, which is good, but the delay is also a little bit high, which is not very nice...", we can say "if the offered traffic and the system service rate satisfy relation R, then the PQ is high enough". That is, instead of showing how the throughput, the mean backlog, the mean response time, ... evolve with some parameters that can be controlled to tune the system's performance, we can work directly with the PQ, the ultimate target, and still use our M/M/* queues, Jackson networks, or whatever model is relevant in our study. This allows obtaining results concerning our final goal, that is, keeping the user happy when watching the video stream or using her telephone, at a reasonable cost. Detailed examples will be given using the author's own proposal for the automatic measurement of PQ, called PSQA (for Pseudo Subjective Quality Assessment) and based on Machine Learning tools, which provides a rational function of several parameters returning the current PQ, parameters that may include those low level metrics. Commented references will be provided in the slides of the presentation.
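The workflow described above can be sketched in a few lines: derive low-level metrics (loss probability, mean delay) from a classic queuing model, then map them to a MOS-like score in [1,5] through a rational function. The queue formulas below are the standard M/M/1/K results; the mapping `pq_score` is a made-up illustrative rational function, NOT the actual PSQA mapping from the talk.

```python
# Sketch: evaluating a MOS-like Perceptual Quality score directly from
# queuing-model outputs. The PQ function is hypothetical, for illustration.

def mm1k_metrics(lam, mu, k):
    """Loss probability and mean sojourn time of an M/M/1/K queue."""
    rho = lam / mu
    if rho == 1.0:
        p_n = [1.0 / (k + 1)] * (k + 1)          # uniform stationary distribution
    else:
        norm = (1 - rho ** (k + 1)) / (1 - rho)  # normalizing constant
        p_n = [rho ** n / norm for n in range(k + 1)]
    loss = p_n[k]                                # blocking probability (PASTA)
    mean_backlog = sum(n * p for n, p in enumerate(p_n))
    accepted = lam * (1 - loss)                  # effective arrival rate
    mean_delay = mean_backlog / accepted         # Little's law
    return loss, mean_delay

def pq_score(loss, delay):
    """Hypothetical rational function mapping (loss, delay) to a MOS in [1, 5]."""
    return 1.0 + 4.0 / (1.0 + 20.0 * loss + 2.0 * delay)

loss, delay = mm1k_metrics(lam=8.0, mu=10.0, k=20)
print(f"loss={loss:.4f}  delay={delay:.3f}  MOS={pq_score(loss, delay):.2f}")
```

Tuning the system then amounts to finding the region of (lam, mu, k) where the PQ stays above a target MOS, rather than reasoning separately about each low-level metric.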

https://hal.inria.fr/hal-01962938
Contributor: Gerardo Rubino
Submitted on : Monday, January 7, 2019 - 2:35:09 PM
Last modification on : Thursday, January 7, 2021 - 4:19:35 PM
Long-term archiving on: Monday, April 8, 2019 - 12:47:37 PM

File

IT41 EPEW18.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01962938, version 1

Citation

Gerardo Rubino. Performance evaluation targeting Quality of Experience, Keynote presentation. EPEW 2018 - 15th European Performance Engineering Workshop, Oct 2018, Paris, France. pp.1-48. ⟨hal-01962938⟩
