Asset UPnP for Raspberry pi

This topic is closed.

  • Spoon
    Administrator
    • Apr 2002
    • 43926

    Re: Asset UPnP for Raspberry pi

    The next round of Asset updates should fix the config on ramdisk issue.
    Spoon
    www.dbpoweramp.com


    • Spoon
      Administrator
      • Apr 2002
      • 43926

      Re: Asset UPnP for Raspberry pi

      @Brigitte - thanks
      Spoon
      www.dbpoweramp.com


      • dunc
        • Dec 2013
        • 18

        Re: Asset UPnP for Raspberry pi

        Originally posted by Spoon
        The next round of Asset updates should fix the config on ramdisk issue.
        Cool, thanks for the update Spoon.


        • simes_pep
          dBpoweramp Enthusiast
          • Dec 2013
          • 288

          Re: Asset UPnP for Raspberry pi

          Originally posted by Brigitte
          Thanks Spoon & team for developing this version for Raspberry, it's working great. Thank you Simon for the excellent manual. My Linux is a bit rusty and it was a great help. If I may suggest an edit: at the bottom of the page in which you describe editing crontab to mount the nas-drive automatically, it says "@reboot sudo mount /home/nas/pi", which I believe should be "@reboot sudo mount /home/pi/nas".
          Oops - corrected that, latest version in my Dropbox @ https://www.dropbox.com/sh/bu8etep76nu4v5z/aMw624ZA5U

          Good to hear that people are finding the Guide useful.

          Thanks,
          Simon.


          • dunc
            • Dec 2013
            • 18

            Re: Asset UPnP for Raspberry pi

            Originally posted by dunc
            ...
            The conclusion I've come to is that, for some reason, the Pi with Asset cannot stream 192/96 WAV files from a NAS to a Naim streamer when the Pi is on the same network switch as the streamer. One solution is to move the Pi to another switch on the network. I'm guessing that this introduces a layer of buffering such that the Pi (or Asset, I'm not sure which) isn't overwhelmed trying to fill the Naim's buffer directly while also streaming the source file from the NAS.

            ....
            Update on my earlier post re issue streaming 192/24 WAVs from NAS via CIFS to Asset on Pi and on to a Naim network player.

            I found a workaround that allows you to do this when the Pi is on the same network switch as the Naim player (for those who haven't read my earlier post, this setup was causing dropouts in music playback).

            The workaround is to make the CIFS mount rsize buffer larger; in my case I found 512k was the smallest buffer size that didn't cause the Naim's buffer to drop when streaming the 192/24 files.
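
            For anyone wanting to try the same change, it is just the rsize option on the CIFS entry in /etc/fstab. A sketch of what such a line might look like (the server address, share name, mount point and other options here are placeholders, not my actual setup):

            //192.168.1.x/music /home/pi/nas cifs rsize=524288,guest,_netdev 0 0

            followed by unmounting and remounting the share (or a reboot) to pick the new option up.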

            Thanks must go to simes_pep - I was reading a post on the Naim forum (I think, or it might've been a Raspberry Pi forum) about issues with CIFS share security on the Pi with Asset. Reading that reminded me that the CIFS mount has buffers that can be tuned.

            For anyone interested in the details:

            I found that the default behaviour of the mount command on the Pi was sizing the CIFS mount's rsize buffer at 61440 (60k). Taking the Linux TCP window size settings (sysctl net.ipv4.tcp_rmem) as a guide, which on my Pi are "4096 87380 3843328" (min, default, max), I started tuning the rsize on the CIFS mount at 87380, rebooted the Pi and saw a minor improvement.
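
            (If you want to check what your own mount has negotiated, the effective rsize shows up in /proc/mounts, and the TCP window settings come from sysctl - for example:

            grep cifs /proc/mounts       # the listed mount options include rsize=...
            sysctl net.ipv4.tcp_rmem     # min, default and max TCP receive window, in bytes

            - the exact values will obviously differ from system to system.)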

            Next I went for the maximum TCP receive window size of 3843328 (3.66MB), rebooted the Pi, and now it streams 192/24 WAV files from the NAS to the Naim with 100% buffer full reported on the Naim, and no more dropouts in the music. iftop now shows the Pi bouncing between 0Mb/s and 15Mb/s for the stream from the NAS, and also bouncing between 6Mb/s and 12Mb/s for the stream to the Naim. The Naim buffer stays at 100% during playback.
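
            (For reference, the iftop figures above come from watching the Pi's wired interface live with something like sudo iftop -i eth0 - the interface name may differ on other builds.)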

            Tuning the rsize down in 1MB chunks, then in 128k chunks, I reached 384k before I saw the problem return, and going back up to 512k seems to be adequate for the CIFS rsize in this scenario. With rsize at 512k iftop shows the streaming from the NAS more consistent between 8.3Mb/s and 10.4Mb/s, and the stream to the Naim hovers around 8.5Mb/s and 9.8Mb/s. The Naim buffer shows 100% full at all times. I suspect I could fine tune to somewhere between 384k and 512k, but this seems good enough. I have 2 music shares mounted from my NAS, one for CD quality, one for higher quality files, so I've only set this on the mount for the hd music share.

            The other interesting thing I spotted while tuning this is that, watching netstat on the Pi, the Send-Q size for the socket to the Naim varies over time and is always well above zero. It hovers between about 19,000 and 29,000 during playback of any bit rate file, drops back to 19,000 when playback is stopped, and never falls from there back to 0 when the player is left idle - until either the socket is closed by the Naim, or the Pi is rebooted.

            From what I read about the netstat Send-Q stat, this indicates there is a bottleneck talking to the Naim.

            The Recv-Q and Send-Q for all other sockets on the Pi are 0 (including the socket for the CIFS mount to the NAS). It's hard to say why this might be without doing a network trace, however I wonder if this is part of the issue when the Pi is on the same network switch as the Naim - i.e. there may be some congestion on the Pi's network interface that limits the effective bandwidth, and this is resolved by the extra receive buffer headroom on the incoming CIFS mount. (As an aside, I also ran Asset on my NAS; watching netstat there, it shows the same behaviour with the Send-Q on the socket to the Naim - so this behaviour appears to be specific to the Naim, not the Pi.)

            I looked into tuning the default TCP write window size on the Pi to see if I could 'fix' the Send-Q behaviour (changing the rsize on the CIFS mount back to the default while doing so). Larger wmem default sizes of 1MB and 2MB both caused the buffer % pattern on the Naim to change for the better initially, then get a lot worse, with the netstat Send-Q on the Pi for the Naim socket rocketing well into the 100,000s, peaking over 1,000,000. Smaller TCP wmem default sizes of 8k and 4k seemed to produce little difference from the original default of 16k. So I abandoned this line of tuning and set my CIFS rsize back to 512k.
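
            (If anyone wants to repeat that wmem experiment, the default TCP write window can be changed on the fly with sysctl, and it reverts at the next reboot unless added to /etc/sysctl.conf. For example, for a 1MB default - the min and max values here are just illustrative:

            sudo sysctl -w net.ipv4.tcp_wmem="4096 1048576 3843328"

            where the middle value is the default per-socket send buffer in bytes.)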

            So it seems that if you see issues streaming higher bit rate files from a NAS over CIFS with the Pi and Asset, you can potentially work around the issue by adding a new layer of buffering, e.g. by putting the Pi on another switch or increasing the CIFS mount's read buffer.

            In my case it appears this issue may be specific to using Asset on Pi with a Naim player where the source file is large and on a CIFS share to a NAS. When I get more spare time, I might look further into the Send-Q behaviour when streaming to the Naim, and probably post any findings on the Naim forums.


            • Batleys
              • Jun 2011
              • 38

              Re: Asset UPnP for Raspberry pi

              Dunc, interesting write-up on TCP buffer sizes to optimise the TCP windowing on the Pi. However I didn't follow your point about adding another 'buffer' with another switch. TCP is at layer 4. A switch's forwarding function, depending on config, will delay a frame a certain number of bits into the frame before onward switching, and this is layer 2. This is different to TCP buffers, and the impact on latency is extremely minimal.
              Simon


              • dunc
                • Dec 2013
                • 18

                Re: Asset UPnP for Raspberry pi

                Originally posted by Batleys
                Dunc, interesting write-up on TCP buffer sizes to optimise the TCP windowing on the Pi. However I didn't follow your point about adding another 'buffer' with another switch. TCP is at layer 4. A switch's forwarding function, depending on config, will delay a frame a certain number of bits into the frame before onward switching, and this is layer 2. This is different to TCP buffers, and the impact on latency is extremely minimal.
                Simon
                Hi Simon,

                Yeah, I'm not too clear on why moving the Pi to another switch on the network works either - prior to this I had assumed that the total switching bandwidth even on a small office/home switch such as the unmanaged NetGear GS108 would be more than enough, and as you say the latency would be tiny, although I'm not sure what the detailed spec of this switch is, to be honest. This is beginning to stretch my Linux and networking knowledge, so I'm only guessing that the second switch is somehow buffering/adding latency on the link between the Pi and the Naim, such that the congestion on the Pi's network interface doesn't occur. When I get some more spare time I will learn how to do some network tracing to see if I can get to the bottom of it (any tips on where to start greatly appreciated!).

                In the meantime, with this workaround in place, I'm enjoying listening to the Naim being served up by Asset on the Pi, this is a great combo.

                Cheers,
                Dunc


                • Batleys
                  • Jun 2011
                  • 38

                  Re: Asset UPnP for Raspberry pi

                  Hi Dunc, although the additional RTD (round-trip delay) imposed by a switch is minimal, it does occur. TCP throughput is affected by the TCP send/receive window sizes, the network bandwidth and the RTD. Typically, the larger the RTD, the larger the buffers need to be for a given throughput. If you are right on the edge, this may come into play.
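
                  To put some purely illustrative numbers on that: the sustainable TCP throughput is roughly the window size divided by the round-trip delay, so taking the 60k default rsize mentioned earlier as the window:

                  throughput ≈ window / RTD
                  61440 bytes x 8 = 491,520 bits of window
                  491,520 bits / 0.001 s ≈ 490 Mbit/s   (sub-millisecond LAN delay: plenty of headroom)
                  491,520 bits / 0.050 s ≈ 9.8 Mbit/s   (50 ms of effective delay: right on the edge of a 9.2 Mbit/s 24/192 WAV stream)

                  The delay figures are made up for the sake of the example, but they show how a little extra delay or a little extra window can make the difference when you are near the limit.
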
                  I found this link on Unix TCP tuning, which I haven't played with on my Pi, but it might be worth looking at. One is also reminded here that the other consideration is the number of concurrent sockets/connections, as the buffers are set for each of them.

                  Simon


                  • simes_pep
                    dBpoweramp Enthusiast
                    • Dec 2013
                    • 288

                    Re: Asset UPnP for Raspberry pi

                    Well, hi folks. Many thanks @dunc and @Batleys for the further input and the reference to my post on the Naim forum.

                    So I have been playing with the rsize on the CIFS mount and, although I can see the changes at the downstream I/O level to the RPi in iStat and there is some improvement in the buffer on the Naim, it is still not solid at 100% from the start of the track, as it is when playing a 24/192 FLAC (or 24/176.4) without transcoding, or with a transcoded 24/96 FLAC file. What's interesting is that the transcoded 24/176.4 file makes it to 100% after a few seconds and is then solidly @ 100%. With a transcoded 24/192 file, the buffer starts off at 70% and can make it to 90% or even 100% during the track, but it can also fall to 50% or 30%, therefore introducing the risk of buffer exhaustion in the ND5 and music dropouts. However, I believe this is the limitation of the 'Ethernet over Mains' plugs I have to use to connect the ND5 to the switch - which, interestingly, is also a GS108, the same as @dunc uses.

                    So the fstab entry is currently

                    //192.168.1.3/media /home/pi/nas cifs rsize=3843360,guest,_netdev,sec=ntlm 0 0

                    So the read buffer is at the max of 3.66MBytes, as taken from the $ sudo sysctl -a | grep mem command, which returns min 4096, default 87380, max 3843360 for the net.ipv4.tcp_rmem entry. However, the net.core.rmem_default and net.core.rmem_max entries are both 163840 (160KBytes), so is that the maximum the read buffer can actually be set to?
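
                    (If net.core.rmem_max is suspected of being the cap, one way to test - purely as an experiment, the value below is illustrative - would be to raise it and remount the share:

                    sudo sysctl -w net.core.rmem_max=4194304
                    sudo umount /home/pi/nas && sudo mount /home/pi/nas

                    and, if it helps, make it permanent with a net.core.rmem_max = 4194304 line in /etc/sysctl.conf.)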

                    Anyway, when transcoding a 24/192 file the upstream I/O is a constant 1.1-1.2 MB/s (9,216 kb/s for the 24/192 file is 1.125 MB/s), so the correct and necessary I/O rate for the uncompressed data stream.
                    The downstream I/O, i.e. into the RPi from the NAS, peaks at 3.8MB/s during a read and then, given that it has 3.6MB of buffered data from which to serve the outgoing stream, falls back to a minimal 30KB/s level.
                    So there is no read buffer exhaustion in the RPi. CPU usage is around 30% to carry out the transcoding.

                    So, given that the RPi has plenty of read buffer and is not CPU bound generating the 1.125MB/s output, the limitation in my system is the upstream I/O from the RPi to the ND5, which happens to go over the Devolo dLAN 500 devices.

                    Do we need to look at the net.ipv4.tcp_wmem settings, which are 4096, 16384, 3843360, so defaulting to 16k?
                    The net.core.wmem_default & net.core.wmem_max are also at 163840 (160K).

                    So, anyway, back to playing FLAC, which gives the solid & reliable buffer level and therefore playback.

                    Thanks,
                    Simon.


                    • Batleys
                      • Jun 2011
                      • 38

                      Re: Asset UPnP for Raspberry pi

                      Simon - we said those Ethernet over mains devices were suspect (and are awful for hifi and RFI as well - but that's another matter) - it's good you have now proved they are the limiting factor.
                      Simon


                      • simes_pep
                        dBpoweramp Enthusiast
                        • Dec 2013
                        • 288

                        Re: Asset UPnP for Raspberry pi

                        Still not sure it is just down to the Devolo dLAN devices, given that I get a good transfer speed from the laptop to the NAS when it is connected to the other socket on the mains plug. The Devolo Cockpit measures the link at 200 Mbits/s (25MB/s), so it should be more than capable of the required 1.125MB/s for the 24/192 stream.
                        I could try introducing a second switch after the mains plug link, and place the RPi on this switch along with the ND5. That way the 'Ethernet over Mains' connection carries the compressed data, which is then transcoded by the RPi and the uncompressed stream sent over normal twisted pair.

                        I still suspect something on the RPi->ND5 TCP connection, either with the default 16k wmem buffer or the Send-Q behaviour that @dunc highlighted. While playing the transcoded 24/192 files I also observed Send-Q levels of up to 29,200. However, this level of Send-Q also exists when playing 24/192 as native FLAC, where the Naim buffer is solid at 100% during track play.

                        Has anyone else observed the Send-Q levels (netstat -tpn -c)?
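
                        (An easy way to watch just the player's connection is to filter the continuous netstat output on the player's IP address - the address below is only a placeholder:

                        netstat -tpn -c | grep 192.168.1.x

                        Recv-Q and Send-Q are the second and third columns of each line.)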

                        Doing some initial reading: "Send-Q will be that data which the sending application has given to the transport, but has yet to be ACKnowledged by the receiving TCP. 'Recv-Q' and 'Send-Q' mean receiving queue and sending queue. These should always be zero; if they're not you might have a problem. Packets should not be piling up in either queue, except briefly."

                        So why is there data provided by the RPi that has yet to be ACKnowledged by the receiving TCP stack (the NIC in the ND5), and why is the Naim buffer not at 100%, with audio dropouts being experienced, for the transcoded 24/192 FLAC files but not the native 24/192 files? It looks like there is sufficient data being provided on the TCP connection from the RPi.

                        More research and consideration needed here.
                        Simon.


                        • simes_pep
                          dBpoweramp Enthusiast
                          • Dec 2013
                          • 288

                          Re: Asset UPnP for Raspberry pi

                          Ok, I've just been playing a 24/192 FLAC file in Foobar2000 through the UPnP Browser on my laptop, over WiFi, which plays fine; however the Send-Q levels on the RPi's TCP socket to the laptop are much higher than RPi->ND5, at 80,300 to 137,240.
                          So the Send-Q issue isn't just on the TCP connection from RPi->ND5, but RPi->Windows also.

                          So is the issue a TCP socket level issue in the RPi or an application layer issue in Asset regarding how it is handling the TCP connection?

                          Thanks,
                          Simon.


                          • Batleys
                            • Jun 2011
                            • 38

                            Re: Asset UPnP for Raspberry pi

                            Simon, as far as I understand it, the Ethernet over mains (and certainly most WiFi) setups operate at half duplex, with collision detection or, perhaps with Ethernet over mains, some sort of token system.
                            In short, there are enforced delays to the TCP window sequencing acknowledgements that don't occur on a full duplex wired link. So it would be interesting to see to what extent the Pi's queues differ between a full duplex and a half duplex link when streaming media using TCP.
                            Certainly on full duplex transfer I have yet to have a buffer challenge on my Naim or any other equipment up to 192/24 WAV using my Pi, although on 192/24 WAV I notice using 'top' that the UPnP process responsible for the transfer has a higher than expected CPU time share, which does possibly suggest some sort of I/O congestion starting to occur on the Pi.
                            Simon
                            Last edited by Batleys; 01-12-2014, 08:40 AM.


                            • Batleys
                              • Jun 2011
                              • 38

                              Re: Asset UPnP for Raspberry pi

                              Guys, just did some brief, non-exhaustive tests on my Pi using CIFS mounts and full duplex 100Mbps UTP copper Ethernet links between devices. The switch used was an 8 port Cisco 2960. The network player used was a Naim NDX.

                              I streamed three file types. All three files showed a regular zeroing of the Send Q (every second or so).

                              FLAC to WAV 44.1/16 Send Q appears to peak around 274000 and flush to zero
                              WAV to WAV 192/24 Send Q appears to peak around 292000 and flush to zero
                              FLAC to WAV 88.2/24 Send Q appears to peak around 292000 and flush to zero.

                              Netstat -tpn -c was used to measure. The Naim showed 100% receive buffer throughout.

                              In summary all appears to be working as expected.

                              Simon
                              Last edited by Batleys; 01-12-2014, 09:41 AM.


                              • dunc
                                • Dec 2013
                                • 18

                                Re: Asset UPnP for Raspberry pi

                                Originally posted by Batleys
                                Hi Dunc, although the additional RTD (round-trip delay) imposed by a switch is minimal, it does occur. TCP throughput is affected by the TCP send/receive window sizes, the network bandwidth and the RTD. Typically, the larger the RTD, the larger the buffers need to be for a given throughput. If you are right on the edge, this may come into play.
                                I found this link on Unix TCP tuning, which I haven't played with on my Pi, but it might be worth looking at. One is also reminded here that the other consideration is the number of concurrent sockets/connections, as the buffers are set for each of them.

                                Simon
                                Thanks Simon (Batleys), yeah I noted that these settings are per socket too. That's why I tuned mine down to the smallest size that worked. Also, the buffer in this case is specific to the single socket used for the CIFS mount to the NAS share, so there wasn't too much risk. That said, I did test it by leaving a 96/24 WAV album playing on repeat overnight, with no issues.

                                I looked up the specs on my GS108. It has a minimum latency of 40us on a 100-100 link, and 10us on a 1000-1000 link, but doesn't specify a figure for a 1000-100 link. It also appears to have a store and forward 'forwarding mode' with a 32k buffer; presumably this only comes into play when there are more than 2 ports active concurrently - as is the case when I'm streaming from the NAS via the Pi to the Naim.

                                Originally posted by Batleys
                                ...

                                Certainly on full duplex transfer I have yet to have a buffer challenge on my Naim or any other equipment up to 192/24 WAV using my Pi, although on 192/24 WAV I notice using 'top' that the UPnP process responsible for the transfer has a higher than expected CPU time share, which does possibly suggest some sort of I/O congestion starting to occur on the Pi.
                                Simon
                                Originally posted by Batleys
                                Guys, just did some brief, non-exhaustive tests on my Pi using CIFS mounts and full duplex 100Mbps UTP copper Ethernet links between devices. The switch used was an 8 port Cisco 2960. The network player used was a Naim NDX.

                                I streamed three file types. All three files showed a regular zeroing of the Send Q (every second or so).

                                FLAC to WAV 44.1/16 Send Q appears to peak around 274000 and flush to zero
                                WAV to WAV 192/24 Send Q appears to peak around 292000 and flush to zero
                                FLAC to WAV 88.2/24 Send Q appears to peak around 292000 and flush to zero.

                                Netstat -tpn -c was used to measure. The Naim showed 100% receive buffer throughout.

                                In summary all appears to be working as expected.

                                Simon
                                It's interesting that you can serve a 192/24 WAV via Asset on the Pi to the same Naim player I'm using (NDX). The potential differences in our configs are many, but the ones of note from what I can see are:

                                - You are using an entry-level enterprise class Cisco 2960 managed switch vs my small home/office unmanaged 2x NetGear GS108 switches. I googled the specs on each and I suspect there isn't really much in it for the relatively basic scenario we're discussing, but it's a difference nonetheless.
                                - I'm not sure which NAS you're using? In my case, it's a self-built NAS based on unRAID.
                                - The build of our Pis is probably different. I'm using the cut-down Minibian based image instead of Raspbian, with some minor tweaking.

                                According to ethtool, all connections in my scenario are also full-duplex, and my Netgear switch is rated at a switching capacity of 2Gb/s when full duplex connections are in use on 1Gb/s links.
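
                                (The duplex/speed check is just ethtool against the wired interface, e.g. sudo ethtool eth0 | grep -E 'Speed|Duplex' - interface name as appropriate for your build.)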

                                In my case the Send-Q is also high for any bit rate of file played, however it never flushes back to 0 after playback for any files. The Send-Q stays high until the connection is either closed by the Naim player or goes into TIME_WAIT.

                                I plan to replace my switches with a small home/office managed switch purely for interest's sake, so I can see what's going on at the switch layer, and to gain port mirroring if/when I learn how to do network tracing.

                                As I said earlier, my workaround now allows me to play back all my files with no issues, so I'm really only chasing this as a matter of interest - but it is interesting.

                                Cheers,
                                Dunc

