Registered Member
|
KDE needs a reporting system for testing versions which collects reports about the most common error message noise, similar to the popular Kerneloops service.
http://www.kerneloops.org/ I don't understand why launching KDE applications from the CLI usually results in so many warnings and error messages, regardless of which distribution you use. KDE developers need a feedback mechanism and a testing environment to reduce that noise and to help detect common misconfigurations of KDE. Such a website would also present the most common crash reports. |
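To illustrate the kind of local collection such a client could do, here is a minimal sketch: run a command, capture its stderr, and tally the distinct warning lines. The function name and report shape are my own assumptions for illustration, not part of any existing KDE tool.

```python
import subprocess
from collections import Counter

def collect_stderr_noise(cmd):
    """Run a command and tally its distinct stderr lines.

    A hypothetical local collector: a real KDEoops-style client
    would forward these counts to a central aggregation service.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return Counter(
        line.strip() for line in proc.stderr.splitlines() if line.strip()
    )

if __name__ == "__main__":
    # Example: a shell command that writes the same warning twice to stderr.
    noise = collect_stderr_noise(
        ["sh", "-c", "echo 'warn: X' >&2; echo 'warn: X' >&2"]
    )
    print(noise.most_common(5))
```

Each user's tally is tiny, but summed over many volunteers it would surface the most common messages automatically, with no manual bug filing.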
Moderator
|
Why not just use bugs.kde.org?
I mean, I know that it's not actually for testing versions, but if anything this could just be added to it, if it isn't there already. Edit: OK, bugs.kde.org didn't help me find whether there is already something for testing versions of apps or not. I'll approve this and see what others say.
Primoz, proud to be a member of KDE forums since 2008-Nov.
|
Registered Member
|
The focus is different. Bug reports are specifications of problems, and they require people to play an active role. I report roughly 1/10 of the bugs I notice, and when I do, in about 75% of cases it turns out to be an already-reported crash. This proposal is just about automated "feedback" that displays the most common error messages, crashes and warnings. Compare also the new Wineoops tools: http://www.winehq.org/pipermail/wine-de ... 80965.html At Wineconf 2009 Red Hat explained the approach nicely: http://people.redhat.com/mstefani/wineo ... nf2009.pdf We of course have many other sources of information for developers: the English Breakfast Network, build reports, bug reports. It is just that this creates some objective visibility and transparency. |
Registered Member
|
Let me add another argument:
Bug reports consume QA capacity of the KDE team. There is a good reason why "quick shot" bug reporting is somewhat discouraged: many reported bugs are duplicates, and when it is a crash you can almost assume that it has been reported by someone else before. KDEoops is about incidents, like an aggregation of log files. It is more like: this month, crash bug #xxxx resulted in 50 crashes among 1000 participants in the KDEoops feedback program.

Hard facts trigger debate about code and runtime quality, and it is easy to aggregate different streams of information into a larger "dashboard", e.g. translation status, EBN, bugs opened/closed, LOC, SVN statistics, buildbot/tinderbox errors and warnings, and so on. I feel that it is important to move further towards a "social development" paradigm, that is, to improve user/developer interaction, application-specific feedback, hard facts and discussion.

E.g. think of Konqueror. Some users may want to volunteer to submit websites which don't render properly or crash. Of course you could also file a KDE bug, but that would be overkill if everyone did. So instead, volunteers have e.g. a simple plasma widget which submits structured data to a kind of database or mailing list, and we get an "Oops website" with aggregated weekly or monthly reports. Then we see, oh, ebay website problems were reported 200 times, and you get a list of e.g. the 10 most common websites which don't work. Or when there is a new version, you can specifically re-test all the websites which didn't work before. Etc. |
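The aggregation step described above can be sketched in a few lines. This is a hedged illustration only: the report schema (`{"url": ..., "symptom": ...}`) and the function name are hypothetical assumptions, not an existing KDE format or API.

```python
from collections import Counter

def top_broken_sites(reports, n=10):
    """Aggregate volunteer-submitted incident reports into a top-N list.

    Each report is assumed to be a dict like
    {"url": "http://example.org", "symptom": "render"} --
    a made-up schema for this sketch.
    """
    counts = Counter(r["url"] for r in reports)
    return counts.most_common(n)

if __name__ == "__main__":
    # Simulated month of widget submissions.
    reports = (
        [{"url": "http://ebay.example", "symptom": "render"}] * 200
        + [{"url": "http://news.example", "symptom": "crash"}] * 50
    )
    for url, hits in top_broken_sites(reports):
        print(f"{url}: {hits} reports")
```

The same `most_common` pattern would serve the other dashboard streams (crash signatures, warnings) equally well; only the key being counted changes.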