Category Archives: Computers

Everything that has to do with, guess what, computers. :-)

Avira can’t get their DNS Setup right

For many months I have been seeing the following issue with Avira‘s DNS setup, and I think it’s extremely embarrassing for a company working in IT security not to get even the basics right… 🙁

This is what I’m seeing:

named[2597]: DNS format error from 89.146.248.46#53 resolving dl4.pro.antivir.de/AAAA for client 127.0.0.1#52127: Name avira-update.net (SOA) not subdomain of zone antivir.de -- invalid response

So what does that mean?

Let’s have a look at which nameservers Avira are using:

$ dig -t ns antivir.de

;; ANSWER SECTION:
antivir.de.        3600    IN    NS    ns13.avira-ns.net.
antivir.de.        3600    IN    NS    ns10.avira-ns.de.
antivir.de.        3600    IN    NS    ns9.avira-ns.net.
antivir.de.        3600    IN    NS    ns12.avira-ns.de.
antivir.de.        3600    IN    NS    ns14.avira-ns.de.

;; ADDITIONAL SECTION:
ns10.avira-ns.de.    86400    IN    A    80.190.154.111
ns12.avira-ns.de.    86400    IN    A    89.146.248.46
ns14.avira-ns.de.    86400    IN    A    74.208.254.45

OK, so 89.146.248.46 from the error message quoted above is indeed one of the nameservers for the domain antivir.de.

So let’s look up the IPv6 address record (AAAA) for dl4.pro.antivir.de on the given nameserver:
$ dig @89.146.248.46 -t AAAA dl4.pro.antivir.de

;; AUTHORITY SECTION:
avira-update.net. 3600 IN SOA ns1.avira-ns.net. domains.avira.com. 2015010301 10800 3600 2419200 3600

WTF?!

Why are they returning an SOA record for avira-update.net, a name that is not a subdomain of the zone antivir.de that was queried?! That’s an error, and BIND rightly discards the response as invalid.

And it’s especially embarrassing as this is the update URL for Avira’s AntiVir product. Remember, we’re talking about a security firm here!
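What BIND applies here is essentially a bailiwick test: authority records whose owner name lies outside the zone that was queried are rejected. A minimal sketch of that suffix check (my own illustration, not BIND’s actual code):

```shell
#!/bin/sh
# Bailiwick test: is $1 equal to, or a subdomain of, zone $2?
# A resolver rejects authority records (like the SOA above) that fail it.
in_bailiwick() {
    case "$1" in
        "$2"|*".$2") echo yes ;;
        *)           echo no  ;;
    esac
}

in_bailiwick dl4.pro.antivir.de antivir.de    # prints: yes
in_bailiwick avira-update.net   antivir.de    # prints: no  (-> "invalid response")
```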

Remove sensitive files from Synology debug.dat

Sometimes Synology support ask that you supply a debug log. This can be done by launching the Support Center application, then going to Support Services > Log Generation and pushing the “Generate logs” button.

If you are concerned that you might be giving them sensitive information, you can clean up the debug.dat file and remove the sensitive files from it.

I wrote a quick shell script for that; it runs under Mac OS X and should also run under Linux. Here it is:

#!/bin/bash

DEBUG_FILE="$1"
NEW_FILE="$2"
if [ -z "${DEBUG_FILE}" -o -z "${NEW_FILE}" ]; then
    echo "You must specify the path to the debug AND to the new file, quitting..."
    exit 1
fi

if [ -z "$TMPDIR" ]; then
    TMPDIR="/var/tmp"
fi

PROG="$(basename "$0")"

if [ ! -r "${DEBUG_FILE}" ]; then
    echo "Debug file ${DEBUG_FILE} is unreadable, quitting..."
    exit 1
fi

if [ -f "${NEW_FILE}" ]; then
    echo "New file ${NEW_FILE} already exists, quitting..."
    exit 1
fi

# BSD mktemp treats "-t foo" as a prefix; the trailing X's keep this
# working with GNU mktemp on Linux as well.
EXCLUDE_PAT="$(mktemp -t "${PROG}.XXXXXX")" || exit 1

cat >"${EXCLUDE_PAT}" <<EOF
volume1/@tmp/SupportFormAttach28229/dsm/etc/application_key.conf
volume1/@tmp/SupportFormAttach28229/dsm/etc/shadow*
volume1/@tmp/SupportFormAttach28229/dsm/etc/ssl/*
EOF

# bsdtar's "@archive" syntax copies the entries of the existing archive
# into the new one; on Linux you may need bsdtar (package libarchive-tools).
tar cfz "${NEW_FILE}" -X "${EXCLUDE_PAT}" @"${DEBUG_FILE}"

rm -f "${EXCLUDE_PAT}"
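In case the `-X` flag is unfamiliar: it reads exclude patterns from a file, and matching entries are omitted from the archive. A tiny self-contained demonstration of that mechanism (generic tar behaviour, nothing Synology-specific):

```shell
# Build a toy tree with one "sensitive" file, then archive it
# while excluding that file via an exclude-pattern list.
mkdir -p demo/etc
echo "secret"   > demo/etc/shadow
echo "harmless" > demo/readme.txt

printf '%s\n' 'demo/etc/shadow' > exclude.txt

tar czf demo.tgz -X exclude.txt demo
tar tzf demo.tgz    # lists demo/readme.txt, but not demo/etc/shadow
```

The same mechanism strips the shadow, SSL, and application-key files from the Synology archive above.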

If this is helpful for anybody, please let me know by commenting on this article.

Terratec Cinergy Hybrid XE not working on Synology

Even on the latest 5.1-5021 version of Synology‘s DSM I couldn’t get my Terratec Cinergy Hybrid XE working on my Synology NAS — the corresponding kernel module tm6000 would always generate a general protection fault on my DS414.

Today I upgraded to a DS415+ (with a completely different CPU), and the module still crashes.

So it seems it’s a real bug in the driver, not just a defect on a single platform (which can happen e. g. due to compiler bugs).

Update 2015-06-19:

Unfortunately Synology seem not very interested in this kind of problem. I reported the issue repeatedly, but all they reply is “Thanks for your report, we’re looking into this.” Even in the latest DSM release, 5.2, the issue is still present.

vzlogger 0.3.9 for Raspbian with microhttpd included

I had a hell of a time compiling vzlogger 0.3.9 for Raspbian with microhttpd included — in the end the resulting binary lacked that functionality.

After a lot of trial and error I “forced” the code to be included by hard-coding the define as follows:

#define LOCAL_SUPPORT 1
#ifdef LOCAL_SUPPORT
#include "local.h"
#endif /* LOCAL_SUPPORT */

As a convenience to those who want that functionality I’ve attached a ready-made package to this post. Let me know if this helps.

Update 2014-12-28: Version 0.4.0 package with uhttpd support available here.

Update 2015-01-05: Version 0.4.0 package based on Git source with SHA d16c0c4c8d83ab9c13f65eb51d931897e7462bc9 available here.

 

USB device draws too much power, PC will shut down

My father-in-law recently asked me about a problem he had been having with his PC for a couple of days. When he switched on the PC he got an error message like this:

A USB device is drawing too much power, the PC will shut down in 15 seconds.

Which it did. 😉

I asked him: “What did you do?” He: “Nothing.” Me: “Really?! Nothing at all?!” He: “Well, I just connected a USB stick to copy over some pictures.” Me: “Huh, so nothing… Let’s see…”

The first thing I did was disconnect all USB devices (because I thought he might have done something else he couldn’t remember or didn’t want to tell ;-)). The error persisted.

So I inspected the front USB ports. When I saw them I didn’t know whether I should laugh or be angry. He had completely destroyed one of the USB sockets, obviously by trying to force the USB stick in the wrong way round. The plastic was broken (and gone!), and the contacts were smashed against the metal “cage” of the socket, obviously causing a short circuit (and thus this “phantom” device that drew too much power ;-))

I opened the case to see whether I could disconnect just the damaged front USB port from the motherboard. But the two ports were connected to the motherboard with a single 10-pin connector block. I could have tried identifying the wires that led to the damaged port, but I was not in the mood for it, so I just used a screwdriver with a small flat blade to bend the contacts away from the metal cage and make sure they couldn’t cause any short circuits anymore. I then “sealed” the port with sticky tape so that he wouldn’t use it again.

Afterwards the PC booted up again as usual.

The whole fix took 20 minutes and cost nothing at all. I bet a computer repair shop would at least have sold him a new motherboard, if not a new board plus CPU and RAM (since the combo is about 4 years old…), plus labor of course.

Hope this helps people with similar issues.

Router blocks SMTP server port(s)

A few days ago an acquaintance whose domain I host on my root server contacted me. She had just moved and suddenly could no longer send mail, although receiving still worked. Her mail client reported “Cannot contact the server.” And no, she definitely had not changed anything in her mail client. 😉

That immediately rang a bell. I asked her whether she had also received a new router, which she confirmed. So my hunch was possibly pointing in the right direction.

What was my hunch? Well, if sending mail suddenly stops working after switching to a new router while receiving still works, the obvious suspicion is that the router is blocking those connections. However, at first it wasn’t clear to me why the router would do that.

Before investigating further I first made sure that my Exim SMTP server was actually working properly. Then I got remote access to her PC using the free and highly recommendable TeamViewer.

To test the connection I wanted to connect to my SMTP server “by hand” via Telnet. The Telnet client was not yet installed, so I had to install it first. Then I ran the following command in a command prompt:

telnet <mailserver-hostname> 587

It produced an error message along the lines of “Connection timed out.”
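The same reachability test can be done without installing a telnet client, for example with bash’s built-in /dev/tcp pseudo-device. A sketch (the host below is a placeholder; 192.0.2.1 is a reserved test address that never answers):

```shell
#!/bin/bash
# Try to open a TCP connection to port 587 (mail submission) with a
# short timeout; a filtering router produces the "blocked" branch.
HOST=192.0.2.1   # placeholder -- put your mail server's hostname here
if timeout 3 bash -c "exec 3<>/dev/tcp/${HOST}/587" 2>/dev/null; then
    echo "port 587 reachable"
else
    echo "port 587 blocked or host down"
fi
```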

That was proof enough that my theory was correct. So I accessed the administration interface of the (Telekom) router via web browser and immediately found a section where outgoing mail servers can be whitelisted. The mail servers of the big mail providers were already listed there. I added my mail server, and immediately after applying the changed configuration the connection to my mail server could be established again.

But why does the router block unknown SMTP servers? When I saw the list of explicitly configurable mail servers it was immediately clear to me, even though I had never seen anything like it before: the router tries to prevent spam bots on infected machines from sending mail.

Not a bad idea as such, but why isn’t the owner clearly informed (e.g. by a red insert in the box) that this security feature is active by default and what effect it has? Is this perhaps also an attempt to sabotage small providers?!

Anyway, the whole “mission” took about fifteen minutes, and afterwards my acquaintance was happy again. Maybe this article helps someone solve a similar problem. I’d appreciate any feedback.

Order in Google Play before Unix Epoch ;-)

Just noticed something funny: There’s a purchase in the “My Orders” list that supposedly happened before Unix Epoch:

Screenshot from Google Play showing one specific order — with a date of December 31, 1969

Maybe for some reason they don’t have the order date, and therefore set it to “-1” (i.e. one second before the Unix epoch, which is 1970-01-01 00:00:00 UTC)… Just guessing…
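This guess is easy to verify from a shell (GNU date syntax; on a Mac you would use `date -r -1` instead of `-d @-1`):

```shell
# Epoch second 0 is 1970-01-01 00:00:00 UTC; second -1 is one tick earlier.
date -u -d @0  '+%Y-%m-%d %H:%M:%S'   # 1970-01-01 00:00:00
date -u -d @-1 '+%Y-%m-%d %H:%M:%S'   # 1969-12-31 23:59:59
# In a US timezone even second 0 already falls on Dec 31, 1969:
TZ=America/New_York date -d @0 '+%Y-%m-%d'   # 1969-12-31
```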

Anyway, I thought it’s funny enough to share it with you…

MSI Mega Sky 580 DVB-T Stick Support dropped by Synology

MSI Mega Sky 580 DVB-T stick users beware:

With DSM 5.0-4493 Update 4 for my DS414, Synology suddenly and deliberately disabled support for my MSI Mega Sky 580 DVB-T stick.

I did not notice this immediately, since Synology do not warn you that your device has been disabled. So I missed a couple of programmes I wanted to record for my kid — thank you so much, Synology! 🙁

I downgraded to DSM 5.0-4493 and then installed Update 3 again, and my stick is still working.

Let’s see how Synology react to my complaint. If they do not re-enable my stick I will complain to Amazon, where I bought the device — let’s see how they respond…

Synology’s speed lie

For a while now I have owned a new Synology NAS, a DiskStation DS414. Synology advertises this model with speeds of

Over 207.07MB/s Reading, 135.63MB/s Writing

However I never came even close to those speeds in my daily use of the DiskStation, so I tried to set up an ideal scenario in which I would get the fastest speed the NAS could deliver.

I did so by using a very fast client (a MacBook Pro Retina with a 2.5 GHz Core i7 CPU and SSD drive), and connected that directly “back-to-back” (i. e. without any network device in-between that could potentially slow the network traffic down) to one of the networking ports of the NAS.

The NAS contains three hard drives, a Western Digital WD30EURS (3 TB, max. speed according to benchmarking >130 MByte/s both reading and writing), a Seagate ST32000542AS (2 TB, max. speed at least 109 MByte/s), and a Western Digital WD40EFRX (4 TB, max. speed 146 MByte/s), in a SHR compound (technically a form of RAID5, so due to the striping involved speed should increase compared to a single drive configuration).

I then copied about 25 GB of large files (movies) between the Mac and the NAS.

The fastest speeds I could get were a meager 79.5 MByte/s on reads and 39.4 MByte/s on writes. That was extremely disappointing, but it confirmed my subjective feeling that the NAS is slow.
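To get comparable raw-write numbers on any Linux box (the NAS included, via SSH), a synced dd write is a handy yardstick. This is a generic sketch with GNU dd and a throwaway path, not the exact procedure I used:

```shell
# Write 64 MiB and let dd report the effective rate itself; conv=fdatasync
# (GNU dd) flushes to disk so the page cache does not inflate the number.
dd if=/dev/zero of=/tmp/speedtest.bin bs=1M count=64 conv=fdatasync
rm -f /tmp/speedtest.bin
```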

To cross-check the read rates I executed the following command directly on the NAS as a plausibility check:

nas1> hdparm -t /dev/sda /dev/sdb /dev/sdc

/dev/sda:
 Timing buffered disk reads: 328 MB in  3.01 seconds = 109.09 MB/sec

/dev/sdb:
 Timing buffered disk reads: 332 MB in  3.00 seconds = 110.66 MB/sec

/dev/sdc:
 Timing buffered disk reads: 392 MB in  3.00 seconds = 130.66 MB/sec

This shows that the NAS is capable of reading from its disks faster than it can deliver data to the client via the network — possibly the CPU is too weak to sustain the full speed Synology promise?

Anyway, I find these disappointing results unacceptable, and they make Synology’s statement a “lie.” I also found severe instability and defects with respect to the VideoStation package and recording from a DVB-T stick. Plus the massive issues Synology have with the power-saving “Hibernation” feature, which never worked for me (neither on this box nor on its predecessor, a DS212+). And I’m not alone: a lot of people have the same issue, but Synology seem unable to solve it.

Considering the high price of the NAS (almost 400 EUR!), my strong opinion is that the device simply is not worth the money. It would have been better to buy an HP ProLiant MicroServer and get more power for less money. 🙁

Mac SSD speeds

There have been some complaints recently about the speeds of the SSD drives built into Mac computers, mostly the MacBook Pro and Air. Supposedly current models are much slower than earlier models, sometimes reaching only 50% of the earlier transfer rates.

As I was curious I benchmarked mine. I used Blackmagic Disk Speed Test which is available for free from Apple’s App Store.

I got 416 MByte/s for writing and 474 MByte/s for reading on my 512 GB SSD drive, which I consider pretty fast:

 

Blackmagic Disk Speed Test results

I have a MacBook Pro Retina, 15-inch, Early 2013 with a 2.4 GHz Intel Core i7. My SSD is an APPLE SSD SD512E, which is obviously made by SanDisk.

What about yours? Please comment here in my blog, giving your machine and SSD details.