I suggest you ...

improve Xen storage performance

38 votes
    james shared this idea

    12 comments

      • Tobias Kreidl commented ·

        Again, I'm asking whether there are any updates. In particular, iSCSI performance continues to be sub-par, especially, I imagine, due to some of the netback drivers in dom0. Going directly from a VM to storage using open-iSCSI, I can get up to 4x the throughput. Where do things stand with this? Is it likely that nothing major will change until Windsor is released, or will some of these issues be improved upon in the current architecture?
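
        A rough sketch of that kind of comparison, for anyone who wants to reproduce it; the portal address, target IQN, and device names below are placeholders, not values from this thread:

            # Inside the guest: log in to the target directly with open-iscsi.
            iscsiadm -m discovery -t sendtargets -p 192.0.2.10
            iscsiadm -m node -T iqn.2013-01.example.com:storage.lun1 -p 192.0.2.10 --login

            # Measure throughput on the directly attached LUN (appears as e.g. /dev/sdb).
            fio --name=direct-iscsi --filename=/dev/sdb --rw=read --bs=1M \
                --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based

            # Then run the identical fio job against the PV block device (e.g. /dev/xvdb)
            # exposed through dom0 from the same storage, and compare the two numbers.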

      • Myles,

        Thanks for the comment. This sounds like a request for a slightly different feature from what is being described here -- this item is about the Xen PV block path, whereas it sounds like you may be talking about using iSCSI directly?

        Also, it sounds like you're talking about XenServer, as opposed to the core Xen platform -- please make a note of that in the title of the ticket when you create it.

        Thanks!
        -George

      • Myles Gray commented ·

        I would like to add that support for VMware's VAAI commands (standard T10 SCSI commands) in the iSCSI initiator would make a huge difference.

        Xen would then have "hardware-accelerated" storage on any VAAI-approved storage appliance or LIO/QuadStor installation.

        We have seen massive improvements in cloning times and the like from these commands -- this is the only reason we aren't moving to Xen from VMware.
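
        For context, the VAAI primitives map onto standard T10 commands: EXTENDED COPY (XCOPY) for cloning, COMPARE AND WRITE for atomic locking, and WRITE SAME/UNMAP for zeroing and space reclaim. As a hedged sketch of the target side only (the backstore name is a placeholder, and initiator-side support is exactly what this request asks for), LIO exposes these offloads as per-backstore attributes:

            # On an LIO target, enable the T10 offloads that VAAI relies on for a
            # block backstore (hypothetically named "vmstore").
            targetcli /backstores/block/vmstore set attribute emulate_3pc=1   # EXTENDED COPY (XCOPY)
            targetcli /backstores/block/vmstore set attribute emulate_caw=1   # COMPARE AND WRITE (ATS)
            targetcli /backstores/block/vmstore set attribute emulate_tpu=1   # UNMAP
            targetcli /backstores/block/vmstore set attribute emulate_tpws=1  # WRITE SAME with UNMAP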

      • Like vhdx, ceph support is an orthogonal issue to performance. Xen uses qemu to get ceph support; this configuration is called qdisk. All disk performance improvements start in blkback before making their way to qdisk, but will reach it eventually; those improvements will improve all qdisk-based protocols, including ceph.
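
        To make the qdisk path concrete, here is a minimal sketch of an xl guest-config disk line pointing at a Ceph RBD image through qemu; the pool and image names are placeholders, and the exact syntax depends on the Xen release in use:

            # Fragment of an xl guest config: attach an RBD image via the qemu-based qdisk backend.
            disk = [ 'format=raw, vdev=xvda, access=rw, backendtype=qdisk, target=rbd:rbd/guest-disk' ]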

      • Tobias Kreidl commented ·

        George: Thanks, and when I spoke of XenServer I of course meant issues that are inherent to Xen and usually trickle up into XCP/XenServer.

        That's great news on Wei Liu's work -- I know the event channel issue caused some rethinking and delays, but it appears this is coming under control. I speak for all of us in saying that we appreciate your development efforts and your willingness to listen to user input.

      • Re the network: That would come under "Improve Xen network performance". :-) But we are working on major improvements on the network side as well. I know Wei Liu has prototypes that get near line speed for the network; he's now getting those into shape to be upstreamed into Linux. From there they will make their way into the distros as well as downstream projects such as XCP.

      • When this forum uses the word "Xen", it means the core open-source project, which includes the Xen hypervisor and the Xen components of the upstream Linux kernel. Major new technical functionality is first implemented there, and then makes its way into downstream projects such as XCP and XenServer.

        Because there are so many requests for XCP on this forum, we have asked the XCP developers to become involved; if you want their attention, you need to specifically label a request "XCP". The Windsor program is primarily configuring XCP (and thus XenServer) to use existing features of Xen: domain 0 disaggregation, driver domains, &c.

        Features that make it into XCP will also make it into XenServer. If you are a XenServer customer, however, you will probably be better served at present by going to the Citrix XenServer forums.

      • Tobias Kreidl commented ·

        Also, when you talk about improvements, do you mean with the current XenServer version and the netback driver used up through XS 6.1, or are you talking about the next-generation Windsor architecture, which will tap into Ceph? Some tuning parameters in dom0 could probably help at least somewhat with some of the 10 Gb performance issues.
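
        On the dom0 tuning point, the usual knobs are Xen boot parameters rather than anything storage-specific. A hedged example (the values are purely illustrative; the right numbers depend on the host) of giving dom0 fixed memory and pinned vCPUs on the hypervisor command line in the bootloader:

            # Xen command line (e.g. in the GRUB entry): a fixed 4 GB for dom0 and four
            # pinned vCPUs, so the backend drivers aren't ballooned or migrated around.
            dom0_mem=4096M,max:4096M dom0_max_vcpus=4 dom0_vcpus_pin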

      • Tobias Kreidl commented ·

        VMware has near-native Linux disk I/O performance. The netback end needs major improvement for better storage I/O; even 10 Gb connections are slow. Open-iSCSI connections made directly from VMs work way better than going through XenServer, so there must be a lot of overhead on the SR end.

      • BTW, I took the liberty of editing this item to give it more focus.

        A bit of feedback on the other items:

        *release blktap3 - disk back end driver

        This is not a feature but a mechanism; it's much better to say what you want to accomplish (e.g., improved I/O performance, support for vhdx, &c) and let the engineers decide on the best mechanism to implement it.

        *support for vhdx format

        This has nothing to do with performance, so it's better if you create a separate request for this.

        *review iscsi performance - lot of users have complained about iscsi initiator from VM better than thru Dom0

        Thanks for raising this -- it sounds like a bug; the best thing would be for one of the users seeing this problem to report it to xen-devel. We may be able to help people get better performance, particularly with the new storage improvements we've been working on in recent kernels.
