Winter Announcement
A little live blog of the winter announcement.
I’m not planning to stop doing these any time soon, as I need them for my own porpoises, but I thought it might be worth having a few instructions out there, so people can run their own conversions of the EVE SDE. Just in case I get hit by a bus, or mauled by crazed cetaceans.
The process is now a /lot/ simpler than it was before. Not as simple as I’d like, but pretty close.
With the first draft of this, there will be no screenshots, and it’s entirely from memory. I may update it when I do the next run.
This should only need to be done the once, rather than each time you want to do the conversion.
On your Windows box:
If you’re wanting to do a Postgres or SQLite conversion, you’ll want a Linux box too. You can probably do it on Windows, but I’ve never done it there. If you have a Linux box on the same network as your Windows box, you could just use that for the MySQL server, rather than doing it all on Windows. Just update the connections below in an appropriate fashion.
At this point, you should have a converted MySQL database. Export it, and then import it wherever you want to put it. For the SQLite and Postgres conversions, that’ll be onto your Linux box, for the next stages.
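If you want a quick sanity check before exporting (just a minimal example of mine, assuming the conversion produced the usual SDE tables like invTypes), a couple of counts should come back with numbers well into the thousands, rather than zero or an error:

SELECT COUNT(*) FROM invTypes;
SELECT COUNT(*) FROM invTypeMaterials;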
The popularity of sites like Eve-Central should go a long way towards proving that the EVE community wants market data.
Community projects like EMDR should also show that many people, myself included, consider it near vital to have market data, if not instant then at least time-delayed, for use in projecting figures for manufacturing.
Unfortunately, the only way to get such data in an at all timely fashion is to cache scrape. At least some members of CCP have stated their disapproval of this method, which led to outcry the last time the EULA was clarified, leaving us in the current state of ‘it’s against the rules, but we’ll only do something about it if you do something else bad’. With the release of Odyssey 1.1, it has also led to at least a period with no market data, as the cache format has been updated, which means all the scrapers need to be updated to take care of this.
I’m calling for CCP to take steps to combat this, and I’m offering a couple of suggestions on how this could be taken forward. The first leads to the least change required of the community. The second requires more change from us, but possibly the least from CCP.
The first suggestion would take the most effort from CCP: have a client that queries TQ and throws market data out onto the EMDR network. Pretty much all the major sites which deal directly with uploads talk with EMDR, so this would lead to immediate integration with those sites. There’d have to be some thought given to the order in which data is spewed from the firehose: more popular data more frequently, less popular data less often. So Tritanium in Jita would be frequent, while 100mm Imperial Navy plates in Minmatar faction warfare space might be once a week.
The second suggestion is a pair of endpoints, where you can request the basic order information, or the history, for a typeID in a specific region. With appropriate caching on the results, this is probably the best long-term solution, as it requires CCP to introduce no new technology to their mix: cache the data when it’s first requested, then serve that same data for the next hour. You’d still have sites like Eve-Central amalgamating the data for different regions, providing historical analysis, and providing their own APIs for aggregate data across multiple typeIDs. You’d probably also have services like EMDR taking feeds from the people querying the API and providing the data to everyone else, reducing the need for everyone to make the same direct queries.
CCP could even rate-limit it per IP, and the players would take care of the rest.
Perfectly timely market data is a liability for traders. But no external market data at all is more of a liability, as it precludes analysis.
No market data causes third-party developers pain. Killboards won’t have an ISK value to apply to kills. Manufacturers will be stuck updating spreadsheets by hand, which is far from a fun job, and will lead to burnout.
It’d be nice if CCP could do both.
I’m indebted to those individuals who have written the means for us to gather market data. I’m not looking to denigrate your work in any way, shape or form. I’m just not comfortable with the bus factor of such projects.
Of course, cache scraping is used for other projects too, such as libdogma, a very handy fitting library which can be integrated into websites. Without the ability to get that fitting data, fitting tools are very, very difficult to implement.
As I don’t use MS SQL Server, except when initially pulling the data, the SQL elsewhere on my site may not work with it, especially the SQL for grabbing the bill of materials for blueprints. This is due to little things like there being no built-in greater() function, and SQL Server doing integer division when you divide integers.
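To illustrate the division point (a little example of my own, not part of Cilegon’s SQL): divide two integers in SQL Server and the result gets truncated, which is why the query below keeps casting wasteFactor to FLOAT before dividing.

-- Both operands are integers, so this returns 0 rather than 0.1
SELECT 10 / 100
-- Cast one side to FLOAT and you get the 0.1 the waste calculation needs
SELECT CAST(10 AS FLOAT) / 100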
So, when Cilegon offered me their updated SQL to work with MS SQL Server, I said ‘yes please’, so I could pass it on to all you good people. And you not-so-good people.
So here it is. You’ll need to create the function once. The rest is for the BOM.
-- Create this function once. SQL Server has no built-in greater(), so this
-- stands in for it, returning the larger of the two values.
DROP FUNCTION dbo.greater
GO
CREATE FUNCTION dbo.greater( @a INT, @b INT )
RETURNS INT
AS
BEGIN
    RETURN CASE WHEN @a >= ISNULL(@b, @a) THEN @a ELSE @b END
END
GO

-- The BOM query itself. 26374 is the example product typeID; @bp_me is the blueprint's ME level.
DECLARE @typeID int = 26374
DECLARE @char_pe smallint = 5
DECLARE @bp_me smallint = 2
;WITH cte1_base_mats AS (
    SELECT typeID = t.typeID,
           name = t.typeName,
           m.quantity,
           bt.wasteFactor
    FROM invTypes t
    INNER JOIN invTypeMaterials m ON m.materialTypeID = t.typeID
    LEFT JOIN invBlueprintTypes bt ON bt.productTypeID = m.typeID
    WHERE m.typeID = @typeID
    UNION
    SELECT typeID = t.typeID,
           name = t.typeName,
           m.quantity * r.quantity * -1 quantity,
           wasteFactor
    FROM invTypes t
    INNER JOIN invTypeMaterials m ON m.materialTypeID = t.typeID
    INNER JOIN ramTypeRequirements r ON r.requiredTypeID = m.typeID
    INNER JOIN invBlueprintTypes bt ON bt.blueprintTypeID = r.typeID
    WHERE r.activityID = 1
      AND bt.productTypeID = @typeID
      AND r.recycle = 1
)
-- Base materials, with waste applied according to the blueprint's ME level
SELECT typeID,
       name,
       quantity = CASE
           WHEN @bp_me >= 0 THEN ROUND(dbo.greater(0, SUM(quantity)) + (dbo.greater(0, SUM(quantity)) * ((CAST(wasteFactor AS FLOAT) / (@bp_me + 1)) / 100)), 0)
           ELSE ROUND(dbo.greater(0, SUM(quantity)) + (dbo.greater(0, SUM(quantity)) * (CAST(wasteFactor AS FLOAT) / CAST(100 AS FLOAT)) * (1 - @bp_me)), 0)
       END,
       base = 1,
       perfect = dbo.greater(0, SUM(quantity))
FROM cte1_base_mats
GROUP BY typeID, name, wasteFactor
ORDER BY name

-- Extra build requirements (skills, categoryID 16, excluded)
SELECT typeID = t.typeID,
       name = t.typeName,
       quantity = r.quantity * r.damagePerJob,
       base = 0,
       perfect = r.quantity * r.damagePerJob
FROM ramTypeRequirements r
INNER JOIN invTypes t ON t.typeID = r.requiredTypeID
INNER JOIN invBlueprintTypes bt ON bt.blueprintTypeID = r.typeID
INNER JOIN invGroups g ON g.groupID = t.groupID
WHERE r.activityID = 1
  AND bt.productTypeID = @typeID
  AND g.categoryID != 16
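As I read the query, to run it for a different item you just change @typeID to the productTypeID of whatever you’re building, and @bp_me to the ME level of your blueprint. The first result set is the base materials with waste applied; the second is the extra build requirements, with skills excluded.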