Category Archives: Dev

Understanding the Eve Online SDE – Part 1

How many parts is something I can’t yet answer. As many as it takes.

The SDE is a Static Data Extract from Eve Online. It contains information like where stars and planets are, how much Tritanium goes into a Typhoon, what skills are needed for which modules, and so on. It’s a useful bag of information if you’re going to be doing any development of third party tools for the game. Much (but not all) of the information is also available directly from ESI, where it’s potentially more up to date. However, it can be a lot more time consuming to process there, and some is just not available. Like what blueprints need.

The official SDE direct from CCP

I’m not going to go into a great deal of depth on this. It’s not what most people use.

From https://developers.eveonline.com/resource/resources you can download the file sde-TRANQUILITY.zip

This is the main source for the SDE, and what I use to create my own versions. The main reason most people don’t use it is that it’s a fuckton (metric, not imperial) of yaml files. Great for passing data around, not so good for looking up something.

CCP tend to update this a wee while after a release happens. Sometimes they need poked to do it. If you’re going to work directly from it, be aware that not all YAML parsers are created equal; they can handle the spec a little differently. pyYAML is probably the one to use if you can, with libyaml to stop it taking forever.
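
If you do go the pyYAML route, the only real trick is picking the libyaml-backed loader when it’s available. A minimal sketch (the commented-out file name is just an example of one of the SDE’s yaml files):

```python
import yaml

# Use the libyaml-backed loader if it was compiled in; the pure-Python
# fallback gives identical results, it's just much slower on the big files.
try:
    SDELoader = yaml.CSafeLoader
except AttributeError:
    SDELoader = yaml.SafeLoader

def load_sde_yaml(stream):
    """Parse one SDE yaml file; accepts an open file object or a string."""
    return yaml.load(stream, Loader=SDELoader)

# e.g. with open('fsd/typeIDs.yaml') as f:
#          types = load_sde_yaml(f)
```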

The SDE release from yours truly

For a little history, CCP used to release the SDE as an MS SQL server dump. When I got started, someone else did the conversions to MySQL, which I used. Then they got slower and slower, and I got impatient, finally working out how to do it myself. A few years later, CCP started converting how they did their data versioning internally, moving from what they called BSD to FSD, which meant they moved to YAML files for much of it. At this point, I totally rewrote the converter process I was using. And because I could, I set it to target a number of different databases. If I get hit by a bus, you can find it at https://github.com/fuzzysteve/yamlloader It’s Python 2.7, but should be simple enough to convert to 3 if you want to.

https://www.fuzzwork.co.uk/dump/ is where I keep the conversions. I don’t promise to keep them for all time, but older versions are available if you want them. The latest version is always in the latest folder, or linked to with the latest links in the root.

If you’re wondering about the structure and names, most of that is a holdover from the original SDE, so that applications needed minimal changes.

Available are:

  • MySQL, both a complete dump, and individual tables
  • SQLite, a complete database file.
  • Postgresql, pg_dump file for use with pg_restore
    • the normal one, which puts all the tables into the public schema
    • the ‘schema’ one, which puts all the tables into the evesde schema. I recommend this one for people starting out with postgres. Easier to update.
  • MS SQL server, a Data Tier Application export.
  • CSV files, now in gzip, bzip2, and plain format.

The Tables (In my conversion)

There are many tables (or CSV files) in the Eve SDE. However, these tend to fall into a small number of groups only a few of which are wanted.

The ‘Stuff’ tables

invTypes

This is the core table for many applications. If you have something which isn’t a unique item, then this is probably where you’ll want to look it up. Have a typeID from ESI which you want to look up? This is that table. Skills, ships, trade goods, generic planet types, all that good stuff. There’s a published column here which basically says whether it’s actively in the game or not.

invGroups and invCategories

invTypes has a groupID column. That links to invGroups, which has a categoryID which links to invCategories. invCategories contains a list of the types of stuff which exist. Like Ships, Skills, Planets, Modules, and so on. invGroups breaks that down further to things like Assault Frigates, Battleships, Science Skills, Asteroids and so on.

Example: you want all the ships in the game. You look up invCategories and find that ships are categoryID 6, so you can use the ID directly (rather than always joining in the invCategories table):

select typeName from invTypes join invGroups on invTypes.groupID=invGroups.groupID and invGroups.categoryID=6;

invMarketGroups

Much like invGroups, invMarketGroups is referred to by marketGroupID in invTypes. A little complexity is added, because each market group can have a parentMarketGroupID. This is so you can have each assault frigate in a market group for their race, that market group in the assault frigates group, that market group in advanced frigates, that in Frigates, and that in Ships. (Ships doesn’t have a parent.) Getting the full tree in pure SQL is a complete pain in the ass.
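
That said, if your database supports recursive CTEs (SQLite, Postgres, and recent MySQL/MariaDB all do), it’s less painful than it used to be. A sketch using sqlite3 against a made-up miniature tree; the column names follow the description above and the IDs are invented, so check both against your own conversion:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE invMarketGroups (
    marketGroupID INTEGER PRIMARY KEY,
    parentMarketGroupID INTEGER,
    marketGroupName TEXT)""")
# An invented miniature tree, not real SDE rows.
con.executemany("INSERT INTO invMarketGroups VALUES (?,?,?)", [
    (4,    None, "Ships"),
    (1361, 4,    "Frigates"),
    (1366, 1361, "Advanced Frigates"),
    (432,  1366, "Assault Frigates"),
])

def market_group_path(group_id):
    """Walk up the market group tree from a leaf to the root."""
    rows = con.execute("""
        WITH RECURSIVE tree AS (
            SELECT marketGroupID, parentMarketGroupID, marketGroupName
              FROM invMarketGroups WHERE marketGroupID = ?
            UNION ALL
            SELECT g.marketGroupID, g.parentMarketGroupID, g.marketGroupName
              FROM invMarketGroups g
              JOIN tree ON g.marketGroupID = tree.parentMarketGroupID
        )
        SELECT marketGroupName FROM tree""", (group_id,)).fetchall()
    return [name for (name,) in rows]
```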

invMetaGroups and invMetaTypes

invMetaGroups defines the various meta level groups. Like tech 1, tech 2, faction and so on. invMetaTypes links that back to invTypes so you can list all the T2 ships by joining it in. It also has the parentTypeID in case you want to see what the T1 version is.

select typeName from invTypes join invGroups on invTypes.groupID=invGroups.groupID and invGroups.categoryID=6 join invMetaTypes on invTypes.typeID=invMetaTypes.typeID and invMetaTypes.metaGroupID=2;

invTraits

All those handy ship bonuses have to live somewhere. Links to invTypes for the ship type (typeID) and to invTypes for the skill (skillID). A skillID of -1 indicates a bonus which isn’t tied to a skill. unitID links to eveUnits so you know if it’s a percentage or what.

invFlags

This is related to the asset table in Eve. If you get back a list of your assets, they’ll have flags on them. You won’t see hangars and so on. The Flags determine where they are. Something with flag 11 is in the first lowslot on a ship, for example. This is why shared hangars are painful to implement.

invTypeMaterials

This _used_ to be used for making things. It isn’t any more; it’s just what you get out of something when you recycle it. typeID links to invTypes for what you’re recycling, materialTypeID links to invTypes and is what you get out. quantity is what you’d get on a perfect (100%) refine.

invVolumes

When things change size as they’re packaged, the volume in invTypes becomes less useful. This is the volume of things which are packaged (when it’s different).

invItems

A table of unique items, where they are, who owns them and what they are. I don’t think I’ve ever seen anyone but CCP use this.

invNames and invUniqueNames

The names for the things in invItems. Don’t know what kind of something a thing is, but have an itemID? It might be in there. There’s almost always a better table to look at to get this information, because you normally know.

invPositions

Where the things in invItems are in space. Like with invNames, it’s unlikely you’ll want to use this.


Industry Tables

When Crius came out, CCP changed up everything. And had a new yaml data file just for making shit. So I created a number of totally new tables from it. If you don’t like the format, it’s all my fault. Try and do better. I dare you.

A common column in these tables is the activityID column. This is to split them up into the various things you can do with a blueprint. Like manufacturing, researching TE, copying, reacting, inventing and so on. Yes, reactions are just blueprints like anything else now.

  • 1: Manufacturing
  • 3: Researching Time Efficiency
  • 4: Researching Material Efficiency
  • 5: Copying
  • 8: Invention
  • 11: Reactions

The Activity IDs

industryActivity

typeID is the blueprint’s typeID in invTypes. time is how long it takes to do that activity, before modifiers.

industryActivityMaterials

Which materialTypeIDs (from invTypes) are used for an activityID when using a typeID (blueprint) from invTypes, and the quantity needed before modifiers.
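
As an illustration, here’s the sort of shopping-list lookup this table enables, sketched with sqlite3 and invented rows (real blueprint IDs and quantities will differ; activityID 1 is manufacturing, per the table above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invTypes (typeID INTEGER PRIMARY KEY, typeName TEXT)")
con.execute("CREATE TABLE industryActivityMaterials "
            "(typeID INTEGER, activityID INTEGER, materialTypeID INTEGER, quantity INTEGER)")
# Invented rows: 'blueprint' 1000 needs two minerals to manufacture.
con.executemany("INSERT INTO invTypes VALUES (?,?)",
                [(34, "Tritanium"), (35, "Pyerite")])
con.executemany("INSERT INTO industryActivityMaterials VALUES (?,?,?,?)",
                [(1000, 1, 34, 32000), (1000, 1, 35, 6000)])

def build_materials(blueprint_id, runs=1):
    """Base material quantities for `runs` manufacturing runs, before any
    ME or structure modifiers are applied."""
    rows = con.execute("""
        SELECT t.typeName, m.quantity * ?
          FROM industryActivityMaterials m
          JOIN invTypes t ON t.typeID = m.materialTypeID
         WHERE m.typeID = ? AND m.activityID = 1""",
        (runs, blueprint_id)).fetchall()
    return dict(rows)
```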

industryActivityProducts

What quantity of productTypeIDs from invTypes are produced per run of a typeID (blueprint) from invTypes for a specific activityID

industryActivitySkills

What skillID from invTypes is required to do activityID on a typeID (blueprint) from invTypes and what level it needs to be.

industryActivityProbabilities

Invention has a different probability depending on the typeID of the blueprint (invTypes), with a handy productTypeID to work out what it makes.

industryBlueprints

maxProductionLimit is the maximum number of runs a blueprint copy can have.


Planetary Interaction.

planetSchematics

The various things you can make on a planet and how long they take.

planetSchematicsPinMap

Just links the schematics to the kind of structure on a planet which makes them, like basic industry. Though it’s by planet type as well.

planetSchematicsTypeMap

Takes the schematicID from planetSchematics, and tells you the quantity of which typeIDs you need to put in (isInput=1) and which typeID you get out (isInput=0).


The Dogma Tables

Dogma is Eve’s engine for knowing stuff about things. What slot does a module go into? Dogma. What skills are required for something? Dogma. How much damage does a gun do? Dogma (though it’s hard to work out. yay)

dgmAttributeTypes

The table with the descriptions of the various dogma attributes, keyed by attributeID.

dgmTypeAttributes

The table which links dogma attributes (attributeID) to item types (typeID in invTypes). These days, just use valueFloat for the value; everything is condensed into that now. Handy for things like how many low slots something has. Or which skills are required. (with a separate attribute for which level it’s needed at. yay)

To see all the attributes and what they do, for a Rifter (typeid 587) try:

select * from dgmTypeAttributes join dgmAttributeTypes on dgmTypeAttributes.attributeID=dgmAttributeTypes.attributeID where typeID=587;

dgmEffects

The various bits of dogma which are just flags on items, rather than things with values. Like whether it’s a low slot module or not. This describes those flags.

dgmTypeEffects

Links typeID from invTypes to effectID in dgmEffects.

dgmAttributeCategories

Just splits the various attributes up into categories, like whether it’s a fitting attribute, a shield one, and so on. Referenced by categoryID in dgmAttributeTypes.

dgmExpressions

This is stuff which relates to how things actually work in Eve; the code behind it. I don’t understand it well enough to explain it.


Map Tables

mapDenormalize

This has pretty much every celestial in it, and where it is, in meters, in its star system, with the star at 0,0,0.

Links to invTypes by typeID so you know what something is (planet etc.), and to invGroups by groupID. Also links to mapSolarSystems with solarSystemID, mapConstellations with constellationID, and mapRegions with regionID.

What might not be obvious is that wormhole effects are here too, with the groupID for secondary suns.

mapJumps

Less useful than it might sound, it tells you which pairs of stargates link up. You probably want mapSolarSystemJumps

mapSolarSystemJumps

Which solarSystemIDs have stargates to which other solarSystemIDs. It also has the region and constellation IDs.
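
Since it’s effectively an adjacency list, route-finding falls out of it easily. A sketch with invented solarSystemIDs; with the real table you’d read the (fromSolarSystemID, toSolarSystemID) pairs out of mapSolarSystemJumps instead:

```python
from collections import deque

# mapSolarSystemJumps stores one row per direction of each gate pair.
# These IDs are invented for illustration.
jumps = [(30000001, 30000002), (30000002, 30000001),
         (30000002, 30000003), (30000003, 30000002),
         (30000001, 30000004), (30000004, 30000001)]

graph = {}
for src, dst in jumps:
    graph.setdefault(src, set()).add(dst)

def jump_route(start, end):
    """Shortest gate route between two systems (breadth-first search)."""
    seen = {start}
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no gate route exists
```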

mapSolarSystems

So you can find out information about solar systems by solarSystemID: names, regionIDs, constellationIDs, where they are in the universe in meters (yes, light year ranges stored in meters), and their security rating.

mapRegions

Similar information, but for Regions.

mapConstellations

Take a wild guess 😉


The end, for now

There are still a bunch of tables which I’ve not talked about yet. Most are pretty self explanatory, but I’ll get to them another time. This should be enough to get you going.

Getting all market aggregates from Market.fuzzwork.co.uk – Excel

This is for Excel only I’m afraid. And only 2013 and up. Google Sheets just won’t load the size of file that this uses.

On https://market.fuzzwork.co.uk I generate aggregate data for all items sold on the market. This is things like average price, minimum price, maximum price, and the average price if you were to buy the cheapest 5% of the market. (handy for evening out any outliers.) This is available via an api, which is documented on the site. There’s a function for google sheets, and instructions on how to use it with power query in Excel.

However, this isn’t wonderful if you want to get everything. To that end, I’ve started dumping out a file with all the data in it. This is updated once every 30 minutes (or so), and is about 24MB uncompressed. Google sheets chokes on it. Excel handles it ok though. So that’s what I’m going to talk about here. I use Excel 2016, so this is focused towards that. I think it’ll work with 2013, but I have no way to test that.

The URL for the data is https://market.fuzzwork.co.uk/aggregatecsv.csv.gz and it’s a gzipped CSV, with headers. The first column is a composite key: the region ID, the typeID, and whether it’s buy or sell orders, separated by pipes.
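
If you’re consuming the same file outside Excel, that composite key splits apart trivially. A quick Python sketch (function name is mine):

```python
def parse_what(what):
    """Split an aggregatecsv key like '10000002|34|true' into its parts:
    (regionID, typeID, is_buy_order)."""
    region, typeid, buy = what.split("|")
    return int(region), int(typeid), buy == "true"
```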

To load this into Excel, create a new sheet. On that sheet, create a new query (Data ribbon, get data, from other sources, blank query). In the advanced editor, you want to paste in:
let
Source = Binary.Decompress(Web.Contents("http://market.fuzzwork.co.uk/aggregatecsv.csv.gz"),Compression.GZip),
#"Imported CSV" = Csv.Document(Source,[Delimiter=",", Columns=10, Encoding=1252, QuoteStyle=QuoteStyle.None]),
#"Promoted Headers" = Table.PromoteHeaders(#"Imported CSV", [PromoteAllScalars=true]),
#"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers",{{"what", type text}, {"weightedaverage", type number}, {"maxval", type number}, {"minval", type number}, {"stddev", type number}, {"median", type number}, {"volume", type number}, {"numorders", Int64.Type}, {"fivepercent", type number}, {"orderSet", Int64.Type}})
in
#"Changed Type"

That should then load all the aggregate data into a sheet for you. Call that sheet something appropriate, like marketdata

Then to query it:

=VLOOKUP("10000002|34|true",marketdata!$A:$J,9,FALSE)

Refer to the sheet name you configured. You can assemble the lookup value as you want: region ID, typeID, and whether you want buy or sell orders (true for buy, false for sell).

Something like

=VLOOKUP("10000002|"&A1&"|false",marketdata!$A:$J,9,FALSE)

works, where A1 contains the typeID you’re interested in.

Google Sheets – Updating from a Menu

Something which has been a problem people have run into is that Google Sheets has trouble automatically updating figures from custom functions. You’ll often find functions which suggest adding a useless reference to a cell, which you can then use to trigger the updates.

This is obviously somewhat suboptimal. So as a potential solution for you good people, I give you the menu based price list.

Google Sheets allows you to have custom functions, but you can’t write from them, only return data. However, Apps Script called from a menu option can write to its heart’s desire. The downside is that you can’t pass it any parameters. So this means you must have the input data stored somewhere it can retrieve by name, and somewhere to output; a named range is one option, but I’m taking the lazy option and using sheets with specific names.

So, with the link above, you can see how to do it. There’s an example sheet you can copy.

As a little explanation:

The onOpen function adds the menu in. It should be possible to update the data from here, but I ran into a bug there, so it’s purely on refresh.


The updatePrices function is pretty much the same as the one for use with a custom function, with a couple of key differences. First, you get the two sheets which are important. Clear the output sheet so we just append to it.

Get the region id from the cell A1 of the typeid sheet.

Now it just runs through the first column, from row 2 (so it ignores the region) to the last row. Deduping the list comes next. Sure you could get the data twice, but this is more efficient.

Then it runs through the typelist 100 ids at a time, getting the json, running through it, and appending the results to the sheet.
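
The dedupe-and-batch idea is language agnostic; sketched here in Python (the Apps Script version does the same thing with array slices, and the batch size of 100 matches the description above):

```python
def dedupe_and_chunk(typeids, size=100):
    """Drop duplicate typeIDs (keeping first-seen order), then yield
    batches of at most `size` IDs, one batch per aggregate API call."""
    unique = list(dict.fromkeys(typeids))
    for i in range(0, len(unique), size):
        yield unique[i:i + size]
```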

Using ESI with Google Sheets

I’ve been meaning to write this post for a while now, so here it finally is; How to use ESI, with google sheets.

Now, the unauthenticated endpoints are relatively easy. You’ll still need to write a little code for them, or steal it from somewhere else, but other than that, it should just work. Authenticated endpoints are another matter.

ESI uses Eve’s SSO for authentication, which means, at some point during the setup, you’ll need to handle the OAuth flow to get what’s known as a refresh token. This token is much like an XML key from the older API, allowing you to access the endpoint without any human intervention. (It’s actually used to get the access token, which allows you to do things for about 20 minutes. The refresh token can be used at will to create new access tokens)
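
For the curious, the refresh step itself is a plain OAuth2 token request: a POST to the token URL, authenticated with HTTP basic auth built from your client id and secret. A sketch (the function name is mine; send the result with the requests library and read access_token out of the JSON response):

```python
import base64

TOKEN_URL = "https://login.eveonline.com/oauth/token"

def build_refresh_request(client_id, client_secret, refresh_token):
    """Assemble the headers and payload for the token-refresh POST.
    Send with e.g. requests.post(TOKEN_URL, headers=headers, data=payload)."""
    creds = base64.b64encode(
        "{}:{}".format(client_id, client_secret).encode()).decode()
    headers = {"Authorization": "Basic " + creds}
    payload = {"grant_type": "refresh_token", "refresh_token": refresh_token}
    return headers, payload
```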

Setup

Due to the way it works, this means you’re going to have to go to the developers site, and create a new application. Each application has a client id, and a secret, and that secret should be held closely and not revealed to the general public. So you need to create your own.

So, go to the site, log in, and click on ‘application’ up at the top. You can then click on ‘create new application’.

Give it a name and a description. I’d suggest something meaningful to you for the name, but the description isn’t important; you just need something there. Pick ‘Authentication & API Access‘ as the type.

Now you need permissions. These are what are known as ‘scopes’ in OAuth. They’re like the various things you can pick in the XML api, allowing access to various different things. What scopes you pick is dependent on what you actually want to do with the login. When actually doing the login, you tell it which scopes you want. As long as they exist in the application definition you’re filling in now, you can get them. You don’t need to ask for them all.

For this example, I’m asking for the esi-universe.read_structures.v1 and esi-markets.structure_markets.v1 scopes. This allows me to pull the market data from a structure I have access to (and get its name). Just click them, and they should move over to the box on the right.

Now comes the ‘complicated’ bit. The callback url. When you use OAuth, you get sent to the login server, log in there, then get redirected back to the source. The callback url is where you’re sent. As you don’t have a webserver for this (probably) we’re going to use a tool called Postman to handle this step.

Fill in https://www.getpostman.com/oauth2/callback as the callback url. Click on ‘create new application’.

Finally, click on ‘view application’ next to your newly created one.

Postman

Postman is a chrome app which is designed for working with APIs. We’ll be using it to handle the initial authentication. Get it from its site. Once you’ve installed it, run it.

It should look something like this. On the authorization tab, change the drop down to ‘OAuth 2.0’. Then click the ‘get new access token’ button.

Auth URL: https://login.eveonline.com/oauth/authorize

Access Token URL: https://login.eveonline.com/oauth/token

Client ID: The ID from the application page you just created

Client Secret: the Secret Key from the application page you just created

Scope: the scopes you want, separated by spaces. So in this case:

esi-universe.read_structures.v1 esi-markets.structure_markets.v1

Finally, hit ‘request token’. This should send you to the login page for Eve. If it doesn’t work, make sure you have no extra spaces at the beginning or end of the various options.

Once that’s done, and you’re authorized, you should see a token in the existing tokens box.

This lists your access token (you can ignore this, tbh. you’ll be getting a new one.), and your refresh token (this is important, so take a note of it, along with your client id and your secret.)

That’s you finished with Postman, until you want a new token for something else.

Google Sheets

Google Sheets doesn’t handle JSON or OAuth natively. This is a bit of a pain, but the ability to add custom functions takes care of that nicely. Especially when someone else writes them for you 😉

For this example, we’ll create a new workbook.

On your new workbook, create a new sheet and call it config. Put the client id, the secret, and the refresh token into separate cells on it.

On the Data menu, pick ‘Named Ranges…’. This will open a sidebar. Select the clientid cell, and click ‘add a range’. Call it clientid, and hit done. Repeat with the secret, and the refresh token, calling them ‘secret’ and ‘refresh’.

You can then close the sidebar. This is just to make it easy to refer to them later.

Now, click on Tools->Script editor. This will open a new window. Switch to it.

Delete the contents of Code.gs (it should be an example myFunction only. If not, you’re in uncharted waters)

Paste in the functions from my Github. I’d explain them all, but that would massively balloon this post, and you’d need to understand Javascript. (They’re relatively simple)

Finally, on a sheet without data you care about (as this is a new workbook, sheet 1 works) use

=getCitadel(1023164547009)


and it should (eventually) populate with all the orders from that citadel. Get the citadel ID from a bookmark, asset ID, or market order.


I need to do more work on this, as it’s not storing the access token anywhere (there’s an issue with the sheet doing updates on cells from custom functions)

Ore compression calculator

I’ve updated the compression calculator with a couple of new features, and stomped out a bug.

The bug was it had the wrong volume for some ores. (oops. it was hooked up to an old database.) That’s now sorted, and hooked up to the DB which updates when a new version comes out.

New Features:

Output values: You are told how many of each mineral you should get out in the end.

Ore selection: you can choose between all ores (the default) and highsec ores only.


Are other options for ore selection useful to you? Would you like to be able to specify nullsec only? Lowsec only? Be able to pick and choose, with max limits for certain kinds? So only 15 compressed Spod, no Arkonor, but as much of the others as you want? Presets are simple from an interface perspective; full customization is somewhat more painful.

The backend, now that I’ve re-familiarized myself with it, is simple enough to update.

As always, the code is in my GitHub repository and can be freely used. However, it does require an external third-party library, which makes it a bit more painful to install. Especially if you’re on shared hosting (probably impossible then).

EDIT:

Nullsec only, and lowsec/nullsec (for the rig) filter added.

Bug stomped wrt Dark Ochre

market.fuzzwork.co.uk

With the introduction (a wee while ago) of the new eve market data endpoint, and the problems which surfaced at Eve Central while that was happening, I decided to create my own market data website. And so, I present market.fuzzwork.co.uk

I posted it was out a few days ago, but thought I should write a little more about it, and the process of creating it.

Background

At fanfest, in one of the round tables, I brought up (again) the idea of an endpoint to get the order book for an entire region at once. While most people just want aggregates, that’s not something that would be easy for CCP to provide for multiple items, at the same time. In part because the aggregation takes time, and because the multiple items makes caching that data a lot harder (different people asking for different lists of things). Lo and behold, a short time later, such an endpoint came into being. It’s paginated into batches of 30,000 items, which is enough for most regions, though The Forge has 9 pages.

Details

So I rented a new server (with an offshoot of OVH called kimsufi) and set about building one. It’s a PHP site, running on an NGINX server, with a PostgreSQL database, and Redis for caching the aggregates.

The downloading script I wrote in Python, and after abortive attempts at aggregating the data, first in the database itself, then breaking it out into the script to process, I settled on using Pandas to do the aggregation. The script version would have taken over an hour to process; the Pandas version runs in a few minutes. This lets me do the grabbing of data once every 30 minutes, retaining it for a couple of days; that means you can look at snapshots of an order over that timeframe, to see how it changes.

That retention brought problems of its own. Not so much in keeping the data (each grab adds a couple of hundred meg of data and indexes) but in cleaning it up. Or, to be specific, the effect on the import of having deleted it. Turns out the database doesn’t like it when you’re inserting and deleting 1.7 million rows every thirty minutes. It’s down to how it stores it. I won’t get into technical details, but the import went from a couple of minutes to over 15, which impacted kind of negatively on the performance of the site. The process wasn’t taking much CPU time, but it completely pegged the disk activity at 100%, and led to the site timing out. Not good.

How to solve this issue? One way would be to get a server with SSDs. Unfortunately, these are kind of expensive, relative to the one I’m using. I’m not made of money, after all. So I put together a Patreon campaign. If you want to contribute to the upkeep of the servers, I’d appreciate a pledge, however small it is. (Small is good in fact. My expenses aren’t that high. I’d feel bad about offloading everything to someone else)

However, a thought came to me recently. I’m using Postgres, and it can partition data, based on specific conditions. So I can have the data being stored in a partition based on the order set, and just truncate out partitions as they age out. This is far more efficient than deletion, and shouldn’t impact on the import speed. It’s not fully tested yet, but it’s looking somewhat better already. It’ll increase my data retention a bit (up to 200 samples (4 days ish), rather than the 96 (2 days) I was planning), but space isn’t too much of a concern that way. And partitions allow for more efficient queries.

Future

I still need to write the API interface for it, so you can request an arbitrary number of aggregate/region combinations, but that isn’t too far out. And I’ll provide a sample method for importing into Google. In addition, I’m providing the downloaded orderbook as a gzipped CSV, for an undetermined period of time (it’s all dependent on space, tbh, and zipped they’re not huge).

I also need to decide how I’m going to winnow the aggregates. The API will only allow for ‘live’ data, as that’s all I’m keeping in redis, but all the data is also being kept in a database table. I’ll likely add a page so you can see how they change over time. To keep the table from becoming an absolute monster, I’m thinking I’ll keep only a week at 30 minute intervals, then averages for the day, if you want to see over a longer time frame.

In case you’re interested, I’m generating a weighted average (price times volume, summed up, then divided by the total volume), the max, the min, and a weighted average of the lowest/highest 5% of the orders (lowest for sell, highest for buy), ignoring any price more than 100 times away from the lowest/highest price. It’ll help when someone puts in a buy order for a million Gilas at 0.01 ISK, which otherwise completely screws with the average prices.
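
In pure Python (the site itself does this with Pandas), my reading of that algorithm looks something like this; orders are (price, volume) pairs, and all the names are mine:

```python
def aggregate(orders, buy=False):
    """Summarise one side of an order book: weighted average, min, max,
    and the weighted average of the cheapest (sells) or dearest (buys)
    5% of volume, ignoring prices more than 100x away from the best."""
    total_volume = sum(v for p, v in orders)
    weighted_avg = sum(p * v for p, v in orders) / total_volume
    best = max(p for p, v in orders) if buy else min(p for p, v in orders)
    # Throw away absurd outliers before the percentile pass (the Gila case).
    if buy:
        sane = [(p, v) for p, v in orders if p >= best / 100.0]
        sane.sort(key=lambda pv: pv[0], reverse=True)  # dearest first
    else:
        sane = [(p, v) for p, v in orders if p <= best * 100.0]
        sane.sort(key=lambda pv: pv[0])                # cheapest first
    target = max(sum(v for p, v in sane) / 20.0, 1)    # 5% of volume, at least 1
    taken = cost = 0.0
    for price, volume in sane:
        take = min(volume, target - taken)
        cost += take * price
        taken += take
        if taken >= target:
            break
    return {"weightedAverage": weighted_avg,
            "min": min(p for p, v in orders),
            "max": max(p for p, v in orders),
            "fivePercent": cost / taken}
```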

Other

Oh, and I put together a very simple fleet tracker. Doesn’t do anything other than show you where fleet members are, and what they’re flying. Needs a CREST privilege, and a link you get from the fleet window. And for you to be fleet boss. I may expand it a little in the future. Maybe for invites too. Though that would require me to store the access and refresh token somewhere other than the user’s session.

As normal, all my code is on my github. Might be a little behind the release, but that’s purely down to me not committing it yet.

I’m open to suggestions, if you have them.

Anyway, Steve signing out.

New SDE Migration process

CCP Tellus is a wonderful person.

Until recently, the SDE (Static Data Extract) was a mix of YAML, an MS SQL server backup, and an SQLite Database for universe data. This was a bit of a pain to convert, with a whole bunch of steps.

Now, the SDE is being provided as a bunch of YAML files. These, by themselves, aren’t a hugely useful format, but a single format makes it easier to write a conversion routine. I can convert into several formats by running a single script (now that I’ve written it). This makes me a very happy man.

https://github.com/fuzzysteve/yamlloader is the repository for the new script. It’s written to convert into mysql, sqlite, postgres, and postgres in a specific schema. And it’s pretty simple to extend to other formats (MS SQL is a possibility, but as I can’t run that on my linux server, I’m not doing it at the moment)

In addition, with the SDE being in YAML, it’s a lot easier to generate the differences between versions. I’ve loaded it into a git repository. This is publicly available. git://www.fuzzwork.co.uk/sde-tq.git

Feel free to look into it 🙂


So, short version: I have a new SDE migration process. It may lead to slight differences in output, but the process is a lot easier for me. And if I get hit by a bus, it’s a lot easier for someone else to take over.

Using Crest with Excel: Crest Strikes Back

If you’ve read my older post on it, you’ll know that using Crest with Excel can be a complete pain in the ass. You have to load in a module, then write something in VBA using it, because VBA doesn’t support JSON natively. And doing it with a user defined function? Well, I never got that far before throwing my hands up and walking away.

Now, I got that working, but I didn’t like it. I ran into Power Query for a while, and it kinda worked, but I didn’t find a way to easily get it to pull a bunch of data and reprocess it the way I wanted. So not so good for CREST prices.

So I was still on the lookout. And I recently came across xlwings, and it made me very happy indeed. I like Python, and I’m fairly good with it now. So, combining the two seemed like a winner.

After a little playing with it, I got it all working, and I now have an Excel sheet which does CREST lookups, and all it needs is Python, xlwings, and requests installed. So, to the instructions:

Install python. I’ve been using 2.7.10, but 3 should work just fine too. Make sure it’s in your path.

Install pywin32 (I’m using this one, for 2.7, 32 bit)

Using Pip, install xlwings. (you’ll do this in a command prompt)

[where you installed python, like c:\python27]\scripts\pip install xlwings

Using pip, install requests.

[where you installed python, like c:\python27]\scripts\pip install requests

Using pip, install the security extras for requests, to stop it bugging you

[where you installed python, like c:\python27]\scripts\pip install requests[security]

Now, go to a directory where you want to create your new workbook, using your command prompt. run the following command (myneatcrestbook, is the name of your new workbook. change it now if you want.):

xlwings quickstart myneatcrestbook

If this fails, it’s probably because your python scripts directory isn’t in your path. So try

[where you installed python, like c:\python27]\scripts\xlwings quickstart myneatcrestbook

With that done, you’ll find a new directory, containing an Excel sheet and a python file. Open the Excel sheet, and enable macros (it’ll probably just prompt you). You’ll also want to turn on the developer ribbon, in the File->Options->Customize Ribbon option.

On the developer ribbon, click ‘macro security’, and check the ‘trust access to the VBA project object model’ box. Hit OK.

Then click the ‘excel add-ins’ button, and click browse. Browse to [where python is]\Lib\site-packages\xlwings, and select the xlwings.xlam file. Hit open, then check the box next to xlwings.

You should now have an xlwings ribbon entry.


That done, you can add code to the python file, come back to excel, and hit ‘import python UDFs’, to make them live.

When you want it to reprocess your functions (say, to update the data again and get new prices), use Ctrl+Alt+F9.

Below you’ll find a simple script to get the 5% average sell price, called with =crestSellPrices(34,10000002) (34 is Tritanium’s typeID; 10000002 is The Forge).

from xlwings import xlfunc
import requests

@xlfunc
def crestSellPrices(typeid, regionid):
    # Pull all the sell orders for this type, in this region
    response = requests.get('https://public-crest.eveonline.com/market/{}/orders/sell/?type=https://public-crest.eveonline.com/types/{}/'.format(int(regionid), int(typeid)))
    data = response.json()
    numberOfSellItems = 0
    sellPrice = dict()
    for order in data['items']:
        # Tally up the volume on sale at each price point
        sellPrice[order['price']] = sellPrice.get(order['price'], 0) + order['volume']
        numberOfSellItems += order['volume']
    if not numberOfSellItems:
        return None  # nothing on sale, so no price to report
    # Walk up the price list until 5% of the total volume has been 'bought'
    prices = sorted(sellPrice.keys())
    fivePercent = max(numberOfSellItems / 20, 1)
    bought = 0
    boughtPrice = 0
    while bought < fivePercent:
        fivePercentPrice = prices.pop(0)
        if fivePercent > bought + sellPrice[fivePercentPrice]:
            boughtPrice += sellPrice[fivePercentPrice] * fivePercentPrice
            bought += sellPrice[fivePercentPrice]
        else:
            diff = fivePercent - bought
            boughtPrice += fivePercentPrice * diff
            bought = fivePercent
    return boughtPrice / bought
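If you want to sanity-check the statistics half of that without hitting CREST, it can be pulled out into a plain helper that takes the order book directly. This is my own sketch (the function name and the (price, volume) pair format are mine, not part of the sheet), but it implements the same 5% weighted average:

```python
def five_percent_sell_average(orders):
    """Volume-weighted average price of the cheapest 5% of sell volume.

    orders: a list of (price, volume) pairs.
    Returns None when there's nothing on sale.
    """
    total_volume = sum(volume for _, volume in orders)
    if not total_volume:
        return None
    # 5% of the units on the market, but always at least one unit
    target = max(total_volume // 20, 1)
    bought = 0
    cost = 0.0
    for price, volume in sorted(orders):  # cheapest price levels first
        take = min(volume, target - bought)
        cost += take * price
        bought += take
        if bought >= target:
            break
    return cost / bought
```

Feeding it a toy order book makes the behaviour obvious: with 100 units on sale, it averages only the cheapest 5.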

CREST market data

This has been out for a while, but now that I’ve got something public out there ( https://www.fuzzwork.co.uk/market/viewer2/ ), I thought I’d write some words.

Of all the things which have happened in my CSM term, the availability of ‘live’ market data is one of the ones I’m happiest about. It might not seem like a big step, but we now have access to data directly from Tranquility, in a way that has a very low cache time. (I’m expecting some future endpoints to have even shorter ones, but for more specific data.) It’s been a long time coming, and is a step towards removing the ‘requirement’ for the grey area of cache scraping (along with the dgmExpressions table being added to the SDE). Grey areas are bad, because they lead people to push the boundaries, and I don’t want to see someone banned for that.

But enough electioneering (Vote for me! 🙂 ). I thought I’d write a little about what it’s doing. I had help in the form of Capt Out’s CREST Explorer, which gave me a head start by saving me from implementing the core ‘log into the SSO’ bit. After that, it’s just a matter of:

  • Grab the region list. (filtering out the Wormholes)
  • Grab the base market groups.
  • Wait for the user to click something
    • If it’s a market group, grab that specific group, and loop through it for direct children
    • If it’s an item, if a region is selected, grab buy and sell orders, before cramming them into tables
    • If it’s a region, grab the region data for where to get buy and sell order data

That’s pretty much it. There are some twiddly bits (like languages) but the rest of it is just AJAX calls, and looping through the result sets for display. I’d hope the code is fairly simple to understand (if you know any javascript/jquery) but if you have any questions, feel free to throw them my way.
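As a flavour of that first step, filtering the wormhole regions out of the region list comes down to a name check. This is a sketch in Python rather than the viewer’s javascript, and the exact payload shape (a list of dicts with a ‘name’ key) is my assumption about the CREST response, but wormhole regions really are named like ‘A-R00001’:

```python
def filter_kspace_regions(region_items):
    """Drop wormhole regions from a CREST-style region list.

    Wormhole regions have names like 'A-R00001', so anything with
    '-R00' in the name gets filtered out. Assumes each item is a
    dict with a 'name' key (my assumption about the payload shape).
    """
    return [region for region in region_items if '-R00' not in region['name']]
```

Called with the items from the region list endpoint, it leaves just the known-space regions for the dropdown.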

https://github.com/fuzzysteve/CREST-Market-Viewer for the source. MIT licensed.

Eve Vegas

I know I’ve been light on posting. Sorry about that, I’ve just not had much to talk about that I can talk about publicly. At least, not that hasn’t been covered in detail by other people. Sugar Kyle has been great at this. Lots of writing. 🙂

However, I’ve been poked by people that think I should be communicating more (Thanks Sugar) so I’m trying to pick my writing up again.

By now, you’ve probably read a bunch about Eve Vegas, and the shenanigans people got up to there. I enjoyed myself greatly, and spent a fair quantity of time talking with people, getting a general feeling for how things are going down. With Phoebe, the general feel I got was resignation: while the jump changes might impact gameplay they liked, it was for the good of the game in general. While not the most upbeat feeling, sometimes you just have to rip the plaster off.

For those that watched the stream, yes, they called my name for a prize. Unfortunately, I was in the roundtable room, where I thought I would be more useful (I caught the presentations later) so I didn’t hear. Oh well, didn’t want those headphones anyway 😉

From a 3rd party dev standpoint, Foxfour’s presentation and roundtable didn’t reveal much that was new. However, they did point to a renewed appetite within CCP for getting us the tools we need to be able to do stuff. While we don’t yet have a timescale for getting authenticated CREST on TQ, he announced that we should be getting it on SiSi in the nearish future, first as part of a trial much like the SSO trial, and then moving onto an automated version. And SSO on TQ, for most people, should be out very soon indeed. Just waiting on a few dotted i’s and crossed t’s.

Capt Out’s CREST presentation is worth a watch; you can find it on YouTube. It’s not a recording of the one from Eve Vegas, instead being a redone version with better production values. The roundtable room wasn’t good for recordings.

One of the big things I came away with, from talking with people, was a reinforced understanding that Discoverability is one of Eve’s key weaknesses. From ‘how do I find a community to be part of’ to ‘How do I change my own market orders, without going via the order screen’. So many options, with google often being the easiest way to find an answer. The NPE talk from Rise did touch on this, but it’s a little more endemic than that. It’s something that CCP will need to focus on, over time. For both New players, and Veterans.


Oh, and I’m running for CSM X. And I’m calling it that for a host of good reasons, not just because it let me update the ‘Vote Steve Ronuken for CSM 9’ tshirt that I have, with a sharpie. It’s an early announcement, but I’m hoping that people read the minutes which should be out shortly, and think I’m doing what they voted for.


Zoom zoom 😉