The CSM XII election cycle has just started, and I'm hoping that I can secure your vote. Or at least a position on your voting ballot. (I'd like position one, of course ;) ) Closer to the opening of voting I'll likely be making some recommendations. Anyway, take a look at my post and please consider me.


market.fuzzwork.co.uk

With the introduction (a wee while ago) of the new EVE market data endpoint, and the problems which surfaced at Eve Central while that was happening, I decided to create my own market data website. And so, I present market.fuzzwork.co.uk.

I posted that it was out a few days ago, but thought I should write a little more about it, and about the process of creating it.

Background

At Fanfest, in one of the round tables, I brought up (again) the idea of an endpoint to get the order book for an entire region at once. While most people just want aggregates, that's not something that would be easy for CCP to provide for multiple items at the same time: partly because the aggregation takes time, and partly because requests for arbitrary lists of items make caching a lot harder (different people asking for different lists of things). Lo and behold, a short time later, such an endpoint came into being. It's paginated into batches of 30,000 orders, which is enough for most regions, though The Forge takes 9 pages.
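
To give a rough idea of what the download side looks like, here's a minimal Python sketch of walking that paginated order book for a region. The URL and the response field names (items, pageCount) are illustrative assumptions rather than the real endpoint definition, so treat it as a shape, not a recipe.

import requests

def fetch_region_orders(region_id):
    # Base URL is a placeholder; substitute the real market order endpoint.
    base = "https://crest.example.com/market/{}/orders/all/"
    orders = []
    page = 1
    while True:
        response = requests.get(base.format(region_id), params={"page": page})
        response.raise_for_status()
        data = response.json()
        orders.extend(data["items"])
        # pageCount is assumed here; the real API may expose paging differently.
        if page >= data.get("pageCount", 1):
            break
        page += 1
    return orders

# The Forge (region 10000002) comes back as 9 pages of up to 30,000 orders.
forge_orders = fetch_region_orders(10000002)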

Details

So I rented a new server (from an offshoot of OVH called Kimsufi) and set about building one. It's a PHP site, running on an NGINX server, with a PostgreSQL database, and Redis for caching the aggregates.

I wrote the downloading script in Python, and after abortive attempts at aggregating the data, first in the database itself, then in the script itself, I settled on using Pandas to do the aggregation. The plain-script version would have taken over an hour to process; the Pandas version runs in a few minutes. This lets me grab the data once every 30 minutes, retaining it for a couple of days, which means you can look at snapshots of an order over that timeframe, to see how it changes.
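
The Pandas side boils down to a groupby over the order rows. This is only a sketch of the idea, assuming the orders for one side of the book are already in a DataFrame with typeID, price and volume columns; it isn't the exact script I run.

import pandas as pd

def aggregate_orders(orders):
    # orders: DataFrame with typeID, price and volume columns for one side of the book.
    def stats(group):
        weighted = (group["price"] * group["volume"]).sum() / group["volume"].sum()
        return pd.Series({
            "weightedAverage": weighted,
            "min": group["price"].min(),
            "max": group["price"].max(),
            "volume": group["volume"].sum(),
            "orderCount": len(group),
        })
    return orders.groupby("typeID").apply(stats)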

That retention brought problems of its own. Not so much in keeping the data (each grab adds a couple of hundred meg of data and indexes), but in cleaning it up. Or, to be specific, the effect that deleting it has on the import. It turns out the database doesn't like it when you're inserting and deleting 1.7 million rows every thirty minutes; it's down to how it stores them. I won't get into technical details, but the import went from a couple of minutes to over 15, which impacted rather negatively on the performance of the site. The process wasn't taking much CPU time, but it completely pegged the disk activity at 100%, and led to the site timing out. Not good.

How to solve this issue? One way would be to get a server with SSDs. Unfortunately, those are kind of expensive, relative to the one I'm using, and I'm not made of money. So I put together a Patreon campaign. If you want to contribute to the upkeep of the servers, I'd appreciate a pledge, however small it is. (Small is good, in fact. My expenses aren't that high, and I'd feel bad about offloading everything onto someone else.)

However, a thought came to me recently. I'm using Postgres, and it can partition data based on specific conditions. So I can have the data stored in a partition per order set, and just truncate partitions as they age out, as sketched below. This is far more efficient than deletion, and shouldn't impact the import speed. It's not fully tested yet, but it's looking somewhat better already. It'll increase my data retention a bit (up to 200 samples, about 4 days, rather than the 96 (2 days) I was planning), but space isn't too much of a concern that way. And partitions allow for more efficient queries.
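
Something like this is the shape of it, using the old inheritance-style partitioning. The table and column names here are made up for illustration, and the real thing has more to it:

import psycopg2

conn = psycopg2.connect("dbname=market")  # connection details are illustrative

def create_partition(order_set_id):
    # One child table per order set; the import writes straight into the child.
    with conn, conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE IF NOT EXISTS orders_{set} "
            "(CHECK (orderSet = {set})) INHERITS (orders)".format(set=order_set_id)
        )

def drop_old_partition(order_set_id, keep=200):
    # Dropping (or truncating) a whole child table is far cheaper than DELETEing rows.
    with conn, conn.cursor() as cur:
        cur.execute("DROP TABLE IF EXISTS orders_{}".format(order_set_id - keep))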

Future

I still need to write the API interface for it, so you can request an arbitrary number of aggregate/region combinations, but that isn't too far out. And I'll provide a sample method for importing into Google Sheets. In addition, I'm providing the downloaded order book as a gzipped CSV, for an undetermined period of time (it's all dependent on space, tbh, and zipped they're not huge).

I also need to decide how I'm going to winnow the aggregates. The API will only allow for 'live' data, as that's all I'm keeping in Redis, but all the data is also being kept in a database table. I'll likely add a page so you can see how they change over time. To keep the table from becoming an absolute monster, I'm thinking I'll keep only a week at 30 minute intervals, then daily averages if you want to look over a longer time frame.

In case you're interested, I'm generating a weighted average (price * volume summed up, then divided by the total volume), the max, the min, and a weighted average of the lowest/highest 5% of the orders (lowest for sell, highest for buy), ignoring any price more than 100 times the lowest/highest price. That filter helps when someone puts in a buy order for a million Gilas at 0.01 ISK, which otherwise completely screws with the average prices. A rough sketch of the 5% calculation is below.
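
In plain Python, the sell-side version of that percentile figure works out roughly like this (a sketch of the calculation, not the exact code the site runs):

def five_percent_sell(orders):
    # orders: list of (price, volume) tuples for the sell orders on one type.
    cheapest = min(price for price, volume in orders)
    # Ignore anything more than 100x the cheapest order, to dodge absurd outliers.
    orders = [(p, v) for p, v in orders if p <= cheapest * 100]
    total = sum(v for p, v in orders)
    target = max(total * 0.05, 1)  # the volume making up the cheapest 5%
    bought = 0
    cost = 0.0
    for price, volume in sorted(orders):
        take = min(volume, target - bought)
        cost += take * price
        bought += take
        if bought >= target:
            break
    return cost / bought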

Other

Oh, and I put together a very simple fleet tracker. It doesn't do anything other than show you where fleet members are, and what they're flying. It needs a CREST privilege, a link you get from the fleet window, and for you to be fleet boss. I may expand it a little in the future, maybe for invites too, though that would require me to store the access and refresh tokens somewhere other than the user's session.

As normal, all my code is on my GitHub. It might be a little behind the release, but that's purely down to me not having committed it yet.

I’m open to suggestions, if you have them.

Anyway, Steve signing out.

New SDE Migration process

CCP Tellus is a wonderful person.

Until recently, the SDE (Static Data Extract) was a mix of YAML, an MS SQL Server backup, and an SQLite database for universe data. This was a bit of a pain to convert, with a whole bunch of steps.

Now the SDE is being provided as a bunch of YAML files. These, by themselves, aren't a hugely useful format, but a single format makes it easier to write a conversion routine. I can convert into several formats by running a single script (now that I've written it). This makes me a very happy man.

https://github.com/fuzzysteve/yamlloader is the repository for the new script. It's written to convert into MySQL, SQLite, Postgres, and Postgres in a specific schema. And it's pretty simple to extend to other formats (MS SQL is a possibility, but as I can't run that on my Linux server, I'm not doing it at the moment).
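
The shape of the conversion is pretty simple: read a YAML file, then push the rows into whichever database you fancy. A stripped-down sketch of the idea (the file layout and column names here are simplified for illustration, not the exact SDE structure):

import sqlite3
import yaml

def load_types(yaml_path, db_path):
    with open(yaml_path) as f:
        types = yaml.safe_load(f)  # assumed shape: {typeID: {attributes...}, ...}
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS invTypes "
                 "(typeID INTEGER PRIMARY KEY, typeName TEXT, groupID INTEGER)")
    for type_id, details in types.items():
        conn.execute("INSERT OR REPLACE INTO invTypes VALUES (?,?,?)",
                     (type_id, details.get("name", {}).get("en"), details.get("groupID")))
    conn.commit()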

In addition, with the SDE being in YAML, it's a lot easier to generate the differences between versions. I've loaded it into a git repository, which is publicly available: git://www.fuzzwork.co.uk/sde-tq.git

Feel free to look into it 🙂

 

So, short version: I have a new SDE migration process. It may lead to slight differences in output, but the process is a lot easier for me. And if I get hit by a bus, it’s a lot easier for someone else to take over.

Using Crest with Excel: Crest Strikes Back

As you may know if you've read my older post on it, using CREST with Excel can be a complete pain in the ass. You have to load in a module, then write something in VBA using it, because VBA doesn't support JSON natively; and as for doing it with a user defined function, well, I never got that far before throwing my hands up and walking away.

Now, I got that working, but I didn't like it. I ran into Power Query for a while, and it kinda worked, but I didn't find a way to easily get it to pull a bunch of data and reprocess it the way I wanted. So not so good for CREST prices.

So I was still on the lookout. And I recently came across xlwings, and it made me very happy indeed. I like Python, and I’m fairly good with it now. So, combining the two seemed like a winner.

After a little playing with it, I got it all working, and I now have an Excel sheet which does CREST lookups, and all it needs is Python, xlwings, and requests installed. So, on to the instructions:

Install Python. I've been using 2.7.10, but 3 should work just fine too. Make sure it's in your path.

Install pywin32 (I’m using this one, for 2.7, 32 bit)

Using pip, install xlwings (you'll do this in a command prompt):

[where you installed python, like c:\python27]\scripts\pip install xlwings

Using pip, install requests.

[where you installed python, like c:\python27]\scripts\pip install requests

Using pip, install the requests security extras, to stop it bugging you:

[where you installed python, like c:\python27]\scripts\pip install requests[security]

Now, go to a directory where you want to create your new workbook, using your command prompt, and run the following command (myneatcrestbook is the name of your new workbook; change it now if you want):

xlwings quickstart myneatcrestbook

If this fails, it's probably because your Python scripts directory isn't in your path. So try:

[where you installed python, like c:\python27]\scripts\xlwings quickstart myneatcrestbook

With that done, you'll find a new directory containing an Excel sheet and a Python file. Open the Excel sheet and enable macros (it'll probably just prompt you). You'll also want to turn on the Developer ribbon, under File -> Options -> Customize Ribbon.

On the Developer ribbon, click 'Macro Security', and check 'Trust access to the VBA project object model'. Hit OK.

Then click the 'Excel Add-ins' button, and click Browse. Browse to [where python is]\Lib\site-packages\xlwings, and select the xlwings.xlam file. Hit Open, then check the box next to xlwings.

You should now have an xlwings ribbon entry.

 

That done, you can add code to the Python file, come back to Excel, and hit 'Import Python UDFs' to make them live.

When you want it to reprocess your functions (say, to update the data again and get new prices), use Ctrl+Alt+F9.

Below you'll find a copy of a simple script to get the 5% average sell price, called with =crestSellPrices(34,10000002).

from xlwings import xlfunc
import requests

@xlfunc
def crestSellPrices(typeid,regionid):
    response=requests.get('https://public-crest.eveonline.com/market/{}/orders/sell/?type=https://public-crest.eveonline.com/types/{}/'.format(int(regionid),int(typeid)))
    data=response.json()
    numberOfSellItems=0
    sellPrice=dict()
    # Tally up the volume on sale at each price point.
    for order in data['items']:
        sellPrice[order['price']]=sellPrice.get(order['price'],0)+order['volume']
        numberOfSellItems+=order['volume']
    # Generate statistics: the average price of the cheapest 5% of the volume.
    averageSellPrice=None
    if numberOfSellItems:
        prices=sorted(sellPrice.keys())
        fivePercent=max(numberOfSellItems/20,1)
        bought=0
        boughtPrice=0
        while bought<fivePercent:
            fivePercentPrice=prices.pop(0)
            if fivePercent > bought+sellPrice[fivePercentPrice]:
                # Take the whole price level and keep going.
                boughtPrice+=sellPrice[fivePercentPrice]*fivePercentPrice
                bought+=sellPrice[fivePercentPrice]
            else:
                # Take just enough from this price level to reach the 5% mark.
                diff=fivePercent-bought
                boughtPrice+=fivePercentPrice*diff
                bought=fivePercent
        averageSellPrice=boughtPrice/bought
    return averageSellPrice

CREST market data

This has been out for a while, but now that I've got something public ( https://www.fuzzwork.co.uk/market/viewer2/ ) out there, I thought I'd write some words.

Of all the things which have happened in my CSM term, the availability of 'live' market data is one of the ones I'm happiest about. It might not seem like a big step, but we now have access to data directly from Tranquility, in a way that has a very low cache time. (I'm expecting some future endpoints to have even lower cache times, but that's for more specific data.) It's been a long time coming, and is a step towards removing the 'requirement' for the grey area of cache scraping (along with the dgmExpressions table being added to the SDE). Grey areas are bad, because they lead people to push the boundaries, and I don't want to see someone banned for that.

But enough electioneering (Vote for me! 🙂 ). I thought I'd write a little about what it's doing. I had help in the form of Capt Out's CREST Explorer, which gave me a head start, rather than needing to implement the core 'log into the SSO' bit. After that, it's just a matter of:

  • Grab the region list (filtering out the wormholes).
  • Grab the base market groups.
  • Wait for the user to click something.
    • If it's a market group, grab that specific group, and loop through it for direct children.
    • If it's an item, and a region is selected, grab the buy and sell orders, before cramming them into tables.
    • If it's a region, grab the region data, for where to get the buy and sell order data.

That’s pretty much it. There are some twiddly bits (like languages) but the rest of it is just AJAX calls, and looping through the result sets for display. I’d hope the code is fairly simple to understand (if you know any javascript/jquery) but if you have any questions, feel free to throw them my way.

https://github.com/fuzzysteve/CREST-Market-Viewer for the source. MIT licensed.

Eve Vegas

I know I’ve been light on posting. Sorry about that, I’ve just not had much to talk about that I can talk about publicly. At least, not that hasn’t been covered in detail by other people. Sugar Kyle has been great at this. Lots of writing. 🙂

However, I've been poked by people who think I should be communicating more (thanks, Sugar), so I'm trying to pick my writing up again.

By now, you've probably read a bunch about Eve Vegas, and the shenanigans people got up to there. I enjoyed myself greatly, and spent a fair quantity of time talking with people, getting a general feeling for how things are going down. With Phoebe, the general feel I got was resignation: while the jump changes might impact gameplay people liked, it was for the good of the game in general. Not the most upbeat feeling, but sometimes you just have to rip the plaster off.

For those who watched the stream: yes, they called my name for a prize. Unfortunately, I was in the roundtable room, where I thought I would be more useful (I caught the presentations later), so I didn't hear. Oh well, didn't want those headphones anyway 😉

From a third-party dev standpoint, Foxfour's presentation and roundtable didn't reveal particularly much. However, they did point to a renewed appetite within CCP for getting us the tools we need to be able to do stuff. While we don't yet have a timescale for getting authenticated CREST on TQ, he announced that we should be getting it on SiSi in the nearish future, first as part of a trial much like the SSO trial, and then moving on to an automated version. And SSO on TQ, for most people, should be out very soon indeed. Just waiting on a few dotted i's and crossed t's.

Capt Out's CREST presentation is worth a watch; you can find it on YouTube. It's not a recording of the one from Eve Vegas, instead being a redone version with better production values. The roundtable room wasn't good for recordings.

One of the big things I came away with, from talking with people, was a reinforced understanding that discoverability is one of Eve's key weaknesses. From 'how do I find a community to be part of' to 'how do I change my own market orders without going via the order screen'. So many options, with Google often being the easiest way to find an answer. The NPE talk from Rise did touch on this, but it's a little more endemic than that. It's something that CCP will need to focus on over time, for both new players and veterans.

 

Oh, and I'm running for CSM X. And I'm calling it that for a host of good reasons, not just because it lets me update the 'Vote Steve Ronuken for CSM 9' t-shirt that I have, with a Sharpie. It's an early announcement, but I'm hoping that people read the minutes, which should be out shortly, and think I'm doing what they voted for.


Zoom zoom 😉

Updated Blueprint Details

I’ve been a trifle remiss in posting of late, especially with the big changes which have come out lately. Sorry about that.

However, I have kept on top of getting the SDE conversions out, including the blueprint details, yanked out of the new blueprints.yaml file and shoved into appropriate database tables. The good news is, they're a lot easier to work with than the old versions. The not so good news is, I've not been back-porting them into MS SQL Server, so that's a step you'll have to manage yourself. (It should be pretty easy, tbh. If you have trouble, give me a yell, and I'll write something up.)

The Tables:

ramActivities

This isn't actually a new table; it's just handy for knowing which activityID is which actual activity, such as 1 being manufacturing.

industryActivity

If you need to know how long it takes to do something, this is your table: blueprint typeID, activityID, and the time in seconds to do it. This time is unmodified by TE, so you'll be reducing it.

industryActivityMaterials

This is the core table for most of you good folk: blueprint typeID, activityID, materialTypeID, quantity, and whether it's consumed. With the changes coming in Phoebe, consume is deprecated. I'm leaving it in, set to 1 for everything, so things won't break, but it'll go away in time.

industryActivityProbabilities

For invention and reverse engineering. No longer do you need to create a table yourself; CCP provide all the information.

industryActivityProducts

The outputs from a blueprint, with its various activities. Handy when you want to be able to determine what is invented from what. Also includes the number of items created in a batch, or the number of runs from invention and reverse engineering.

industryActivityRaces

This one will be going away. Pre-Phoebe, it let you know what you could reverse engineer from what, race-wise. That's no longer needed.

industryActivitySkills

The skills to do each activity.

industryBlueprints

I needed somewhere to keep the maxProductionLimit value. That’s what this table is for.

 

As you can see, everything is a lot simpler to query now, with everything tied to the blueprint typeID, rather than to the final product. In the examples below, I'm using Expanded Cargohold IIs: 1320 is the blueprint, 1319 is the actual thing.

[lang=sql]
select typeName,materialTypeID,quantity
from industryActivityMaterials iam
join invTypes on (iam.materialTypeID=invTypes.typeID)
where activityid=1 and iam.typeid=1320
[/lang]
To make something.

[lang=sql]
select typename,materialTypeID,iam.quantity
from industryActivityMaterials iam
join industryActivityProducts iap on (iap.typeid=iam.typeid)
join invTypes on (iam.materialTypeID=invTypes.typeID)
where iam.activityid=8 and iap.activityid=8 and iap.producttypeid=1320
[/lang]
To invent the blueprint to make them. This gives the materials for the invention, but not the required blueprint. To get that, you could just pull iam.typeid.


The formulas for using all these figures have been drawn together by Qoi, in this forum post https://forums.eveonline.com/default.aspx?g=posts&t=362493 which is handy. The core one is:

quantity after adjustment = max(runs, ceil(round(quantity * (1-(ME/100)) * (1-(facility bonus/100)) * (1-(team bonus 1/100)) * (1-(team bonus 2/100)), 2)))

The round, in case you’re curious, is there to eliminate a pesky floating point rounding error, which leads to some materials being 1 too high.
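
As a quick Python rendering of that formula (a sketch: quantity here is the unmodified material quantity from the tables above, and the bonuses are all percentages):

import math

def adjusted_quantity(quantity, runs, me, facility_bonus=0, team_bonus_1=0, team_bonus_2=0):
    modified = (quantity
                * (1 - me / 100.0)
                * (1 - facility_bonus / 100.0)
                * (1 - team_bonus_1 / 100.0)
                * (1 - team_bonus_2 / 100.0))
    # The round() is the fix for the floating point error mentioned above.
    return max(runs, int(math.ceil(round(modified, 2))))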

Google Docs, the XML API, and the nightmare of importXML

I loathe importXML. Truly despise it. Sure, it works for some things, but the moment you want to get more than a single piece of XML data back from each call, it becomes a problem.

Sure, you can have an importXML call with multiple attributes in the xpath entry, but that depends on the site never changing the order in which things come back. Which isn't the world's best idea.

There is, however, a solution. It's one that I've been looking at recently, since Mynnna linked me a forum post about someone loading asset information into Google Docs. It's a neat solution, especially because the asset API is a more complicated one, with nesting (assets can exist within assets).

That got me looking at other APIs, and led to the creation of a new GitHub repository where I'm keeping the code of some new custom functions for loading data. I'm not using them myself, but that's because I'm a bit of a database snob; I can do things there which would be an absolute PITA to do in a spreadsheet. However, I am a code junkie, so this lets me scratch that itch (I can do this better! Tada!).

 

I thought I'd explain a little of what goes into these functions, basing it off the simplest of them: the outpost loader function.

function loadOutposts(){
  var outposts = new Array();
  var url = "https://api.eveonline.com/eve/ConquerableStationList.xml.aspx";
  var parameters = {method : "get", payload : ""};
  // Fetch the XML as text, then parse it into something the code can walk through.
  var xmlFeed = UrlFetchApp.fetch(url, parameters).getContentText();
  var xml = XmlService.parse(xmlFeed);
  if(xml) {
    // Burrow down to the row elements: result -> rowset -> row.
    var rows = xml.getRootElement().getChild("result").getChild("rowset").getChildren("row");
    for(var i = 0; i < rows.length; i++) {
      // One row of output per outpost. Attribute names are case sensitive.
      var outpost = [rows[i].getAttribute("stationID").getValue(),
                     rows[i].getAttribute("stationName").getValue(),
                     rows[i].getAttribute("stationTypeID").getValue(),
                     rows[i].getAttribute("solarSystemID").getValue(),
                     rows[i].getAttribute("corporationID").getValue(),
                     rows[i].getAttribute("corporationName").getValue()
                     ];
      outposts.push(outpost);
    }
  }
  return outposts;
}

If you want to have a function that returns a bunch of rows and columns, you have to build an array of data, and return that.

An array is a simple idea. It's a way of bundling up information. A single-dimensional array is a list. A two-dimensional array is a table (like a spreadsheet). You can have more levels, which become somewhat harder to visualise (3-dimensional: a table where each cell can contain a list of data, and so on).

So you have to take the XML data and format it into a two-dimensional array. Thankfully, this works easily: each row in the API becomes a row in the array. You build a list, and add it to the bottom of the total array (known as a push).

Step one: Get the XML. That's what the UrlFetchApp.fetch line does. At this point, you just have a bunch of text, which happens to be XML.
Step two: Turn the text into a data structure your code can deal with. That's the XmlService.parse line. You now have a data structure that your code can access, and loop through.
Step three: Make sure it was actually XML you had. If the parser couldn't handle it, the xml variable wouldn't be usable, so the if (xml) test protects the rest of the code.
Step four: Loop through it. We don't care about anything but the rows, so we burrow down to that level (the getChild and getChildren line). With that split off, we now have an array (a one-dimensional list) of rows. The for loop iterates through that, with a counter that points to the address of each entry.
Step five: Create a list of data about each outpost. Because we're working with a proper XML engine here, it's case sensitive on attribute names, unlike importXML.
Step six: Push the list of data about a specific outpost onto the bottom of the list of outposts.
Step seven: Repeat steps five and six until it's finished.
Step eight: Hand the array back, with a return statement.

That's pretty much the core of all the functions. Some are a little more complex, because they're turning typeIDs into typeNames and so on. That's done by generating an array where the address is the typeID, and the entry is the name. You can see an example of that in the MarketOrders function, for stations and typeIDs. If you're wondering why I'm adding a _ to the key, it's to make sure it's treated as text, so no funny business with number comparison or arrays with holes happens.

For the Market Orders function, you'll also notice it requires the moment.js library to be loaded. This is because the scripting engine behind Google Sheets handles dates very badly; moment.js makes it far easier to do something like add 90 days to a date.

Anyway, here are the required steps to get one of the scripts working (excluding anything like the moment.js library and additional sheets, mentioned in the comments for specific scripts):

 

  • Open up a new, or existing, sheet.
  • Tools menu -> Script editor.
  • If you haven't done this before for this sheet, it'll ask you what you want to create a script for. Pick 'Blank Project'.
  • Delete the text which it shows (the function myFunction() { } stuff).
  • Paste in the text for the function of mine you want to add.
  • As long as there aren't other specific steps, you're now done. The Eve Central and outpost scripts don't need anything else.

At this point, you can just use the functions in your regular sheets 🙂

 

If there are any APIs you can’t get working, give me a yell, and I may be able to help.