2019 September
-
I have a GeForce 970; it's a bit old, as I got it in 2015. I've been having an issue where sometimes, in some games, my displays (I have two monitors) go completely black, but the game keeps running for a while (I can hear the audio). After a couple of minutes the audio also stops, so I can't really do anything except a hard reboot. I've tried VNCing into the PC to see what's going on after this happens, but it's not accessible over the network, so I assume the whole machine is dead.
The crashes happen most often while playing Magic Arena (says a lot about the game!), but I've also had it happen once or twice in Starcraft 2 and Borderlands 2. I don't play very graphically demanding games. I can't identify any specific action that causes the crashes.
Some other things I've tried:
- attaching my 2nd monitor to the onboard graphics, keeping the main monitor connected to the video card. In this case, when the crash happens, the main monitor goes black while the 2nd monitor still has a display, but it's frozen
- removing my 2nd monitor (I thought I might be overtaxing the graphics card by having two of them?)
- checking for memory (RAM) issues. I did have some RAM issues a couple of months ago, but that turned out to be due to a dirty connector, and it was causing crashes not just in games but in general Windows usage, so I think this is a different issue now
- checking for disk issues. I actually found some bad sectors on my secondary drive and ended up replacing it. But even when I run everything from my primary (SSD) drive, the graphics crashes still occur
- reinstalling/updating device drivers
- resetting Windows 10 (yup, I went that far)
After all of these, I'm still encountering the crashes, so I'm seriously considering replacing the graphics card now. But I'd like to be sure that it's a graphics card issue and not something wonky with the motherboard, as my purchasing decision will be different if the motherboard is the problem (I might as well get a new PC then). I'm looking for any advice to help me diagnose or confirm the problem.
For reference, my current PC specs:
- Case Silverstone Precision 10SST-PS10B
- Fan 120mm internal aux fan
- PSU Cooler Master B600 V2 600W
- CPU Intel Core i5-4460
- MB ASUS H97ME
- RAM 2x DDR3 Kingston HyperX Fury 8GB 1600
- VC ASUS GTX970 STRIX OC 4GB
- SSD Samsung 850 EVO 250GB
- SSHD Seagate FireCuda 2TB
2018 February
-
According to the documentation here: https://djangobook.com/syndication-feed-framework/
If link doesn't return the domain, the syndication framework will insert the domain of the current site, according to your SITE_ID setting
However, I'm trying to generate a feed of magnet: links. The framework doesn't recognize the scheme and prepends the current site's domain, so the links end up like this (on localhost):
<link>http://localhost:8000magnet:?xt=...</link>
Is there a way to bypass this?
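One workaround I'm considering, as an untested sketch: carry the raw magnet URI through item_extra_kwargs and overwrite the <link> element in a custom feed generator, after add_domain has already done its damage. TorrentFeed, Torrent, and magnet_uri below are stand-ins for my actual code.

from django.contrib.syndication.views import Feed
from django.utils import feedgenerator

class MagnetFeedGenerator(feedgenerator.Rss201rev2Feed):
    def add_item_elements(self, handler, item):
        # Swap the domain-prefixed link for the raw magnet URI before
        # the standard <link> element is written out.
        if item.get('magnet'):
            item['link'] = item['magnet']
        super(MagnetFeedGenerator, self).add_item_elements(handler, item)

class TorrentFeed(Feed):
    feed_type = MagnetFeedGenerator
    title = "Latest torrents"
    link = "/torrents/feed/"

    def items(self):
        return Torrent.objects.order_by('-added')[:50]  # stand-in model

    def item_link(self, item):
        return '/'  # throwaway; replaced by MagnetFeedGenerator above

    def item_extra_kwargs(self, item):
        return {'magnet': item.magnet_uri}  # stand-in field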
2018 January
-
A client updated their EC2 CentOS 6.4 instance to 6.9 using yum update. After the update and a reboot, the instance wouldn't start up anymore (it became unreachable).
The system log (recovered from the AWS console) goes something like this:
Xen Minimal OS!
start_info: 0x10d3000(VA) nr_pages: 0xe4f53 shared_inf: 0xeea31000(MA)
pt_base: 0x10d6000(VA) nr_pt_frames: 0xd mfn_list: 0x9ab000(VA)
mod_start: 0x0(VA) mod_len: 0 flags: 0x300
cmd_line: root=/dev/sda1 ro console=hvc0 4
stack: 0x96a100-0x98a100
MM: Init
_text: 0x0(VA) _etext: 0x7b824(VA) _erodata: 0x97000(VA) _edata: 0x9cce0(VA)
stack start: 0x96a100(VA)
_end: 0x9aa700(VA)
start_pfn: 10e6 max_pfn: e4f53
Mapping memory range 0x1400000 - 0xe4f53000
setting 0x0-0x97000 readonly
skipped 0x1000
MM: Initialise page allocator for 1807000(1807000)-e4f53000(e4f53000)
MM: done
Demand map pfns at e4f54000-20e4f54000.
Heap resides at 20e4f55000-40e4f55000.
Initialising timer interface
Initialising console ... done.
gnttab_table mapped at 0xe4f54000.
Initialising scheduler
Thread "Idle": pointer: 0x20e4f55050, stack: 0xe4810000
Thread "xenstore": pointer: 0x20e4f55800, stack: 0xe4820000
xenbus initialised on irq 3 mfn 0xfeffc
Thread "shutdown": pointer: 0x20e4f55fb0, stack: 0xe4830000
Dummy main: start_info=0x98a200
Thread "main": pointer: 0x20e4f56760, stack: 0xe4840000
"main" "root=/dev/sda1" "ro" "console=hvc0" "4"
vbd 2048 is hd0
******************* BLKFRONT for device/vbd/2048 **********
backend at /local/domain/0/backend/vbd/2334/2048
629145600 sectors of 512 bytes
**************************
vbd 2128 is hd1
******************* BLKFRONT for device/vbd/2128 **********
backend at /local/domain/0/backend/vbd/2334/2128
314572800 sectors of 512 bytes
**************************

GNU GRUB version 0.97 (3751244K lower / 0K upper memory)

  CentOS (2.6.32-696.18.7.el6.x86_64)
  CentOS (2.6.32-573.el6.x86_64)
  CentOS (2.6.32-358.18.1.el6.x86_64)
  CentOS-6.4-x86_64-GA-03 2.6.32-358.el6.x86_64

The highlighted entry will be booted automatically in 1 seconds.

Booting 'CentOS (2.6.32-696.18.7.el6.x86_64)'

root (hd0)
Filesystem type is ext2fs, using whole disk
kernel /boot/vmlinuz-2.6.32-696.18.7.el6.x86_64 root=/dev/xvde ro crashkernel=auto LANG=en_US.UTF-8 KEYTABLE=us
initrd /boot/initramfs-2.6.32-696.18.7.el6.x86_64.img

============= Init TPM Front ================
Tpmfront:Error Unable to read device/vtpm/0/backend-id during tpmfront initialization! error = ENOENT
Tpmfront:Info Shutting down tpmfront
close blk: backend=/local/domain/0/backend/vbd/2334/2048 node=device/vbd/2048
close blk: backend=/local/domain/0/backend/vbd/2334/2128 node=device/vbd/2128
We're not too experienced with AWS, so we don't have much of a clue what to do. Any idea what we should be looking at, or anything to point us in the right direction? The instance was actually a clone of production that we used to test the upgrade before doing it on actual production, so we have a chance to try it again and adjust any settings before or after the update.
2017 December
-
I have a query using MarkLogic node.js that basically boils down to something like this:
db.documents.query(qb.where(qb.collection('test'))).stream()
  .on('data', function(row) {
    console.log("Stream on data");
  })
  .on('end', function() {
    console.log("Stream on end");
  })
  .on('error', function(error) {
    console.log(error);
  });
Now, for a certain collection we have in our database, the 'end' event never fires, i.e. I never see "Stream on end" appear in the log. There's no error or anything; processing just stops. It's only for this particular collection; other collections seem fine.
If I query documents in that collection directly using other methods such as qb.value() without using qb.collection(), the end event fires correctly. But once I add qb.collection() into the mix (using qb.and), the end event doesn't fire.
I'm unsure how to debug this, as this is my first time trying to use streams in the nodejs client library. Any advice as to what I can check?
Thanks!
-
How to specify that a column in the schema should be nullable?
I tried adding a nullable attribute:
var myFirstTDE = xdmp.toJSON({
  "template": {
    "context": "/match",
    "collections": ["source1"],
    "rows": [
      {
        "schemaName": "soccer",
        "viewName": "matches",
        "columns": [
          { "name": "id", "scalarType": "long", "val": "id", "nullable": 0 },
          { "name": "document", "scalarType": "string", "val": "docUri" },
          { "name": "date", "scalarType": "date", "val": "match-date" },
          { "name": "league", "scalarType": "string", "val": "league" }
        ]
      }
    ]
  }
});
tde.validate([myFirstTDE]);
But this gave me a template error:
"message": "TDE-INVALIDTEMPLATENODE: Invalid extraction template node: fn:doc('')/template/array-node('rows')/object-node()/array-node('columns')/object-node()[1]/number-node('nullable')"
For a template defined using XQuery, adding nullable to the column works:
<column>
  <name>ISSN</name>
  <scalar-type>string</scalar-type>
  <val>Journal/ISSN</val>
  <nullable>true</nullable>
</column>
How do I do the same thing using JS/JSON?
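For the record, the variant I plan to try next, as a guess: the error complains about a number-node('nullable'), so maybe the JSON form simply wants a real boolean instead of 0/1:

{ "name": "id", "scalarType": "long", "val": "id", "nullable": true }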
2017 November
-
This is a follow-up to my question here: https://stackoverflow.com/questions/47449002/marklogic-template-driven-extraction-and-triples-dealing-with-array-nodes/47459250#47459250
So let's say I have a number of documents structured like this:
declareUpdate();
xdmp.documentInsert(
  '/test/tde.json',
  {
    content: {
      name: 'Joe Parent',
      children: [
        { name: 'Bob Child' },
        { name: 'Sue Child' },
        { name: 'Guy Child' }
      ]
    }
  },
  { permissions: xdmp.defaultPermissions(), collections: ['test'] });
I want to define a template that would extract triples from these documents defining sibling relationships between the children. For the above example, I would want to extract the following triples (the relationship is two-way):
Bob Child sibling-of Sue Child
Bob Child sibling-of Guy Child
Sue Child sibling-of Bob Child
Sue Child sibling-of Guy Child
Guy Child sibling-of Bob Child
Guy Child sibling-of Sue Child
How can I set up my template to accomplish this?
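In case it helps frame an answer: if TDE can't express the pairwise relation, my fallback would be generating the triples at ingestion time and embedding them in the document. A rough sketch below; I'm assuming the {"triple": {subject, predicate, object}} embedded-triple shape, and I'm using plain names rather than proper IRIs, which may well be wrong.

declareUpdate();
var doc = cts.doc('/test/tde.json').toObject();
var names = doc.content.children.map(function (c) { return c.name; });
doc.triples = [];
// Emit both directions for every distinct pair of children.
names.forEach(function (a) {
  names.forEach(function (b) {
    if (a !== b) {
      doc.triples.push({
        triple: { subject: a, predicate: 'sibling-of', object: b }
      });
    }
  });
});
xdmp.documentInsert('/test/tde.json', doc,
  { permissions: xdmp.defaultPermissions(), collections: ['test'] });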
Thanks!
-
I've been studying the examples here: https://docs.marklogic.com/guide/semantics/tde#id_25531
I have a set of documents that are structured with a parent name and an array of children nodes with their own names. I want to create a template that generates triples of the form "name1 is-a-parent-of name2". Here's a test I tried, with a sample of the document structure:
declareUpdate();
xdmp.documentInsert(
  '/test/tde.json',
  {
    content: {
      name: 'Joe Parent',
      children: [
        { name: 'Bob Child' },
        { name: 'Sue Child' }
      ]
    }
  },
  { permissions: xdmp.defaultPermissions(), collections: ['test'] });

cts.doc('/test/tde.json');

var tde = require("/MarkLogic/tde.xqy");

// Load the user template for user profile rows
var template = xdmp.toJSON({
  "template": {
    "context": "content",
    "collections": ["test"],
    "triples": [
      {
        "subject": { "val": "xs:string(name)" },
        "predicate": { "val": "sem:iri('is-parent-of')" },
        "object": { "val": "xs:string(children/name)" }
      }
    ]
  }
});

//tde.validate([template]),
tde.templateInsert("/templates/test.tde", template);
tde.nodeDataExtract([cts.doc('/test/tde.json')]);
However, the above throws an Exception:
[javascript] TDE-EVALFAILED: tde.nodeDataExtract([cts.doc("/test/tde.json")]) -- Eval for Object='xs:string(children/name)' returns TDE-BADVALEXPRESSION: Invalid val expression: XDMP-CAST: (err:FORG0001) Invalid cast: (fn:doc("/test/tde.json")/content/array-node("children")/object-node()[1]/text("name"), fn:doc("/test/tde.json")/content/array-node("children")/object-node()[2]/text("name")) cast as xs:string?
What is the proper syntax for extracting array nodes into a triple?
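For the first question, the next variant I was going to try, as a guess: anchoring the context on the repeated node so each child yields one row, and reaching back up for the parent's name (I'm not sure `..` is allowed in val expressions):

var template = xdmp.toJSON({
  "template": {
    "context": "content/children",
    "collections": ["test"],
    "triples": [
      {
        "subject":   { "val": "xs:string(../name)" },
        "predicate": { "val": "sem:iri('is-parent-of')" },
        "object":    { "val": "xs:string(name)" }
      }
    ]
  }
});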
2nd somewhat related question: say I also wanted to have triples of the form "child1 is-sibling-of child2". For the example above it would be "Bob Child is-sibling-of Sue Child". What would be the proper syntax for this? I'm not even sure how to begin with this one.
Is TDE even the way to go here? Or is it better to do this programmatically? i.e. on document ingestion, generate those triples inside the document directly?
(If it's relevant, the ML version being used is 9.)
-
I've been testing migrating one of our systems to MarkLogic 9 and using the Optic API.
One of our functions involves grouping claims by member_id, member_name and getting the sums and counts, so I did something like this:
var results = op.fromView('test', 'claims')
  .groupBy(['member_id', 'member_name'], [
    op.count('num_claims', 'claim_no'),
    op.sum('total_amount', 'claim_amount')
  ])
  .orderBy(op.desc('total_amount'))
  .limit(200)
  .result()
  .toArray();
The above works fine. The results are of the form:
[
  {
    member_id: 1,
    member_name: 'Bob',
    num_claims: 10,
    total_amount: 500
  },
  ...
]
However, we also have a field "company", where each claim is filed under a particular company. Basically, the relevant view columns are claim_no, member_id, member_name, company, and claim_amount.
I would like to be able to show a column that lists the different companies for which the member_id/member_name has filed claims, and how many claims were filed for each company.
i.e. I want my results to be something like:
[
  {
    member_id: 1,
    member_name: 'Bob',
    num_claims: 10,
    total_amount: 500,
    companies: [
      { company: 'Ajax Co', num_claims: 8 },
      { company: 'Side Gig', num_claims: 2 }
    ]
  },
  ...
]
I tried something like this:
results = results.map((member, index, array) => {
  var companies = op.fromView('test', 'claims')
    .where(op.eq(op.col('member_id'), member.member_id))
    .groupBy('company', [
      op.count('num_claims', 'claim_no')
    ])
    .result()
    .toArray();
  member.companies = companies;
  return member;
});
The output seems correct, but it also executes quite slowly - almost a minute (the total number of claim documents is around 120k).
In our previous ML8 implementation, we were pre-generating summary documents for each member - so retrieval was reasonably fast with the downside that whenever we got a bunch of new data, all of the summary documents had to be re-generated. I was hoping that ML9's optic API would make it easier to do the retrieval/grouping/aggregates on the fly so we wouldn't have to do that.
In theory, I could just add company to the groupBy fields, then merge the rows in the result as needed. But the problem with that approach is that I can't guarantee I'll get the top 200 members by total amount (as in my original query).
So, the question is: Is there a better way of doing this with a reasonable execution time? Or should I just stick to pre-generating the summary documents?
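For context, the closest single-query approach I've come up with so far is a chained groupBy with op.arrayAggregate - a sketch, assuming arrayAggregate is available and groupBy can be chained like this. Note it only collects the company names per member, not the per-company claim counts, so it doesn't fully cover my case:

var results = op.fromView('test', 'claims')
  // First level: one row per member/company with that company's totals.
  .groupBy(['member_id', 'member_name', 'company'], [
    op.count('company_claims', 'claim_no'),
    op.sum('company_amount', 'claim_amount')
  ])
  // Second level: roll the company rows up into one row per member.
  .groupBy(['member_id', 'member_name'], [
    op.sum('num_claims', 'company_claims'),
    op.sum('total_amount', 'company_amount'),
    op.arrayAggregate('companies', 'company')
  ])
  .orderBy(op.desc('total_amount'))
  .limit(200)
  .result()
  .toArray();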
-
I have a MarkLogic 9 project that I'm configuring with Roxy. I've been following these examples: https://github.com/marklogic-community/roxy/wiki/Adding-Custom-Build-Steps
Basically, I have a server-side JS function that I want to call after deploying content. I have something like this:
# then you would define your new method
def deploy_content
  # you can optionally call the original
  original_deploy_content

  # do your stuff here
  execute_query(%Q{
      xquery version "1.0-ml";
      xdmp:javascript-eval('var process = require("/ingestion/process.sjs"); process.postDeployContent();')
    },
    :db_name => @properties["ml.app-name"] + "-content")
end
The xquery being called here evaluates fine when executed via query console. But when I call ml local deploy content, I get the following error:
ERROR: 500 "Internal Server Error"
ERROR: XDMP-MODNOTFOUND: var process = require("/ingestion/process.sjs"); process.postDeployContent(); -- Module /ingestion/process.sjs not found
in [anonymous], at 1:14 [javascript]
at 3:6, in xdmp:eval("var process = require("/ingestion/process.sjs"); proce...") [javascript]
in /eval, at 3:6 [1.0-ml]
Why is the module not found when running via XQuery from app_specific.rb?
Or is there a better way to call a JS module function from here? Sorry, I'm not too familiar with the XQuery side, which is why I just called a JS function instead.
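One direction I intend to test: pointing the eval at the app's modules database explicitly, in case the query is resolving modules against the wrong database or the filesystem ("my-app-modules" is a placeholder for the actual modules database name):

execute_query(%Q{
    xquery version "1.0-ml";
    xdmp:javascript-eval(
      'var process = require("/ingestion/process.sjs"); process.postDeployContent();',
      (),
      <options xmlns="xdmp:eval">
        <modules>{xdmp:database("my-app-modules")}</modules>
        <root>/</root>
      </options>)
  },
  :db_name => @properties["ml.app-name"] + "-content")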
-
Basically the title. The client is complaining that when he zooms in, the text labels for the nodes are quite large. Is there a way to keep the node labels at a fixed font size even when zooming in or out?
From the nodes documentation (http://visjs.org/docs/network/nodes.html), there's a scaling.label option, but it doesn't seem to work. I think this is only relevant if I'm using values to scale the nodes.
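The fallback I'm considering if there's no built-in option: counter-scaling the font on the network's zoom event, sketched below (assumes `network` is the vis.Network instance and `nodes` is the vis.DataSet backing it):

var baseFontSize = 14;
network.on('zoom', function () {
  var scale = network.getScale();
  // Divide by the current scale so the on-screen label size stays constant.
  nodes.update(nodes.getIds().map(function (id) {
    return { id: id, font: { size: baseFontSize / scale } };
  }));
});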
-
Does Roxy have support for deploying templates for use with MarkLogic 9's Template Driven Extraction?
2017 October
-
I'm trying out the Roxy deployer. The Roxy app was created using the default app-type. I set up a new ML 9 database and ran "ml local bootstrap" using the default ports (8040 and 8041).
Then I set up a node application and tried the following (sample code from https://docs.marklogic.com/jsdoc/index.html):
var marklogic = require('marklogic');
var conn = {
  host: '192.168.33.10',
  port: 8040,
  user: 'admin',
  password: 'admin',
  authType: 'DIGEST'
};
var db = marklogic.createDatabaseClient(conn);

db.createCollection(
  '/books',
  {author: 'Beryl Markham'},
  {author: 'WG Sebald'}
).result(function(response) {
  console.log(JSON.stringify(response, null, 2));
}, function (error) {
  console.log(JSON.stringify(error, null, 2));
});
Running the script gave me an error like:
$ node test.js { "message": "write document list: cannot process response with 500 status", "statusCode": 500, "body": "<error:error xsi:schemaLocation=\"http://marklogic.com/xdmp/error error.xsd\" xmlns:error=\"http://marklogic.com/xdmp/error\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n <error:code>XDMP-IMPMODNS</error:code>\n <error:name>err:XQST0059</error:name>\n <error:xquery-version>1.0-ml</error:xquery-version>\n <error:message>Import module namespace mismatch</error:message>\n <error:format-string>XDMP-IMPMODNS: (err:XQST0059) Import module namespace http://marklogic.com/rest-api/endpoints/config does not match target namespace http://marklogic.com/rest-api/endpoints/config_DELETE_IF_UNUSED of imported module /MarkLogic/rest-api/endpoints/config.xqy</error:format-string>\n <error:retryable>false</error:retryable>\n <error:expr/>\n <error:data>\n <error:datum>http://marklogic.com/rest-api/endpoints/config</error:datum>\n <error:datum>http://marklogic.com/rest-api/endpoints/config_DELETE_IF_UNUSED</error:datum>\n <error:datum>/MarkLogic/rest-api/endpoints/config.xqy</error:datum>\n </error:data>\n <error:stack>\n <error:frame>\n <error:uri>/roxy/lib/rewriter-lib.xqy</error:uri>\n <error:line>5</error:line>\n <error:column>0</error:column>\n <error:xquery-version>1.0-ml</error:xquery-version>\n </error:frame>\n </error:stack>\n</error:error>\n" }
If I change the port to 8000 (the default appserver that inserts into Documents), the node function executes correctly as expected. I'm not sure if I need to configure anything else with the Roxy-created appserver so that it works with the node.js application.
I'm not sure where the "DELETE_IF_UNUSED" part in the error message is coming from either. There doesn't seem to be any such text in the configuration files generated by Roxy.
Edit: When accessing 192.168.33.10:8040 via the browser, I get an XML response with a similar error:
<error:error xsi:schemaLocation="http://marklogic.com/xdmp/error error.xsd"
             xmlns:error="http://marklogic.com/xdmp/error"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <error:code>XDMP-IMPMODNS</error:code>
  <error:name>err:XQST0059</error:name>
  <error:xquery-version>1.0-ml</error:xquery-version>
  <error:message>Import module namespace mismatch</error:message>
  <error:format-string>XDMP-IMPMODNS: (err:XQST0059) Import module namespace http://marklogic.com/rest-api/endpoints/config does not match target namespace http://marklogic.com/rest-api/endpoints/config_DELETE_IF_UNUSED of imported module /MarkLogic/rest-api/endpoints/config.xqy</error:format-string>
  <error:retryable>false</error:retryable>
  <error:expr/>
  <error:data>
    <error:datum>http://marklogic.com/rest-api/endpoints/config</error:datum>
    <error:datum>http://marklogic.com/rest-api/endpoints/config_DELETE_IF_UNUSED</error:datum>
    <error:datum>/MarkLogic/rest-api/endpoints/config.xqy</error:datum>
  </error:data>
  <error:stack>
    <error:frame>
      <error:uri>/roxy/lib/rewriter-lib.xqy</error:uri>
      <error:line>5</error:line>
      <error:column>0</error:column>
      <error:xquery-version>1.0-ml</error:xquery-version>
    </error:frame>
  </error:stack>
</error:error>
If it matters, MarkLogic version is 9.0-3.1. It's a fresh install too.
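One thing I suspect (unconfirmed): the node.js client talks to the REST API, and maybe an app server created with the default app-type doesn't expose one, so the app would need to be created as a REST app type instead, e.g.:

ml new my-app --app-type=rest --server-version=9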
Any advice?
-
I'm trying to migrate one of my dev environments from ML8 to ML9. I have an import script that works successfully on the ML8 version, but there's an error when I run it against the ML9 database. The ML9 version is 9.0.3.1; the MLCP version is 9.0.3.
My MLCP options file is as follows:
import -host 192.168.33.10 -port 8041 -username admin -password admin -input_file_path d:\maroon\data\mbastest.csv -mode local -input_file_type delimited_text -uri_id ClientId -output_uri_prefix /test/records/ -output_uri_suffix .json -document_type json -transform_module /ingestion/transform.js -transform_function testTransform -transform_param test -content_encoding windows-1252 -thread_count 1
Here's the output of a test run with only 2 records in the test CSV file:
17/10/30 14:07:33 INFO contentpump.LocalJobRunner: Content type: JSON
17/10/30 14:07:33 INFO contentpump.ContentPump: Job name: local_455168344_1
17/10/30 14:07:33 INFO contentpump.FileAndDirectoryInputFormat: Total input paths to process : 1
17/10/30 14:07:38 WARN contentpump.TransformWriter: Failed document /test/records/31.json
17/10/30 14:07:38 WARN contentpump.TransformWriter: <error:format-string xmlns:error="http://marklogic.com/xdmp/error" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">XDMP-UNEXPECTED: (err:XPST0003) Unexpected token syntax error, unexpected QName_, expecting $end or SemiColon_</error:format-string>
17/10/30 14:07:38 WARN contentpump.TransformWriter: Failed document /test/records/32.json
17/10/30 14:07:38 WARN contentpump.TransformWriter: <error:format-string xmlns:error="http://marklogic.com/xdmp/error" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">XDMP-UNEXPECTED: (err:XPST0003) Unexpected token syntax error, unexpected QName_, expecting $end or SemiColon_</error:format-string>
17/10/30 14:07:38 INFO contentpump.LocalJobRunner: completed 100%
17/10/30 14:07:38 INFO contentpump.LocalJobRunner: com.marklogic.mapreduce.MarkLogicCounter:
17/10/30 14:07:38 INFO contentpump.LocalJobRunner: INPUT_RECORDS: 2
17/10/30 14:07:38 INFO contentpump.LocalJobRunner: OUTPUT_RECORDS: 2
17/10/30 14:07:38 INFO contentpump.LocalJobRunner: OUTPUT_RECORDS_COMMITTED: 0
17/10/30 14:07:38 INFO contentpump.LocalJobRunner: OUTPUT_RECORDS_FAILED: 2
17/10/30 14:07:38 INFO contentpump.LocalJobRunner: Total execution time: 5 sec
If I remove the transform params, the import works fine.
I thought it might be a parsing issue with my transform module itself, so I tried replacing it with the following example from the documentation:
// Add a property named "NEWPROP" to any JSON input document.
// Otherwise, input passes through unchanged.
function addProp(content, context) {
  const propVal = (context.transform_param == undefined)
    ? "UNDEFINED" : context.transform_param;

  if (xdmp.nodeKind(content.value) == 'document' &&
      content.value.documentFormat == 'JSON') {
    // Convert input to mutable object and add new property
    const newDoc = content.value.toObject();
    newDoc.NEWPROP = propVal;

    // Convert result back into a document
    content.value = xdmp.unquote(xdmp.quote(newDoc));
  }
  return content;
}

exports.addProp = addProp;
(Of course I changed the params in the MLCP options file accordingly)
The issue still persists even with just this test function.
Any advice?
-
I've been testing out the WebView component, but I can't seem to get it to render things.
Sample here: https://snack.expo.io/r1oje4C3-
export default class App extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.paragraph}>
          Change code in the editor and watch it change on your phone!
          Save to get a shareable url. You get a new url each time you save.
        </Text>
        <WebView source={{html: '<p>Here I am</p>'}} />
        <WebView source={{ uri: 'http://www.google.com'}} />
      </View>
    );
  }
}
When running the above example in Expo, neither of the two WebView components seems to render. What am I doing wrong?
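The next thing I plan to try, on the assumption that the WebViews are collapsing to zero height inside the container: giving them explicit flex.

<View style={{ flex: 1 }}>
  <WebView
    style={{ flex: 1 }}
    source={{ html: '<p>Here I am</p>' }}
  />
</View>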
-
I'm trying out vis.js for generating graph visualization. For every edge in my graph, I have a number that describes the strength of the connection between two nodes. I'd like to render the vis.js graph such that the nodes that have stronger relationships (higher edge values) are closer together (edge length is shorter).
I've set the relationship strength (an integer) as the "value" attribute for each edge, but this only seems to make the edge lines slightly thicker for higher values.
What options should I be looking at? I'm not sure if this is supposed to be a function of vis's physics-based stabilization.
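To clarify what I'm after, this is the kind of mapping I'd want to apply when building the edges - a sketch, assuming a per-edge length property overrides the physics spring length (rawEdges and the 300 constant are mine):

var edges = new vis.DataSet(rawEdges.map(function (e) {
  return {
    from: e.from,
    to: e.to,
    value: e.strength,         // current behavior: thicker line for higher values
    length: 300 / e.strength   // desired: stronger connection => shorter edge
  };
}));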
Thanks for any advice!
2017 September
-
I have an HTML page with around 10 charts generated by chart.js (so these are canvas elements). I want to be able to export the page content into a PDF file.
I've tried using jsPDF's .fromHTML function, but it doesn't seem to support exporting the canvas contents (either that, or I'm doing it wrong). I just did something like:
$(".test").click(function() {
  var doc = new jsPDF();
  doc.fromHTML(document.getElementById("testExport"));
  doc.save('a4.pdf');
});
Any alternative approaches would be appreciated.
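One alternative I'm about to try: rasterizing each chart canvas with toDataURL and placing the images via addImage, sketched below (the page coordinates and sizes are arbitrary):

$(".test").click(function () {
  var doc = new jsPDF();
  $("#testExport canvas").each(function (i, canvas) {
    if (i > 0) doc.addPage();
    // chart.js renders into plain canvas elements, so toDataURL should work
    doc.addImage(canvas.toDataURL("image/png"), "PNG", 10, 10, 190, 100);
  });
  doc.save("charts.pdf");
});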
-
I'm using ML8. I have a bunch of json documents in the database. Some documents have a certain property "summaryData", something like:
{
  ...(other stuff)...
  summaryData: {
    count: 100,
    total: 10000,
    summaryDate: (date value)
  }
}
However, not all documents have this property. I'd like to construct an SJS query to retrieve the documents that don't have this property defined. If it were SQL, I guess the equivalent would be something like "WHERE summaryData IS NULL".
I wasn't sure what to search for in the docs. Any advice would be helpful.
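The closest I've gotten on my own is the double negative below - a sketch, assuming cts.andQuery([]) acts as a match-everything query inside the property scope:

// Documents WITHOUT a summaryData property: negate a scope query that
// matches any document where summaryData contains anything at all.
var missing = cts.search(
  cts.notQuery(
    cts.jsonPropertyScopeQuery('summaryData', cts.andQuery([]))
  )
);
missing.toArray().forEach(function (doc) {
  // process each document lacking summaryData
});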
-
I'm using ML8 and Node.js. The documentation here: http://docs.marklogic.com/guide/node-dev/documents#id_68765 describes how to do conditional updates in ML using the versionId field.
But is it possible to do a conditional update based on a different field?
My scenario: I have JSON documents with elements assignedTo and assignDate (where assignDate is set to the current date every time a new value is set to assignedTo).
Now, for my "Assign" operation, I would like to make sure that no one else has changed the assignedTo/assignDate fields between the time I read the document and when I perform the update. I don't care whether other fields in the same document have been updated; if they have, I can still proceed with the Assign operation (hence I cannot use the versionId approach, since that covers the whole document).
How can this be done?
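A sketch of the workaround I have in mind, in case field-level conditions aren't supported: push the check-and-set into a single server-side eval, so the re-read and the write share one transaction (the URI, values, and error name here are all made up):

// Server-side check-and-set: re-read assignDate inside the update
// transaction and only write if it still matches what I read earlier.
var assignScript =
  'declareUpdate();' +
  'var doc = cts.doc(uri).toObject();' +
  'if (doc.assignDate !== expected) {' +
  '  fn.error(xs.QName("ASSIGN-CONFLICT"));' +
  '}' +
  'doc.assignedTo = assignee;' +
  'doc.assignDate = new Date().toISOString();' +
  'xdmp.documentInsert(uri, doc);';

db.eval(assignScript, {
  uri: '/members/123.json',   // hypothetical document
  expected: '2017-09-01',     // the assignDate value I originally read
  assignee: 'newUser'
}).result(function (response) {
  console.log('assign succeeded');
}, function (error) {
  console.log('assign conflict - someone got there first', error);
});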
2017 August
-
Sorry if the title is unclear, but I'm finding my problem hard to explain concisely. I have a number of JSON documents with structure like this:
{
  "count": 100,
  "groups": [
    { "name": "group A", "count": 12 },
    { "name": "group B", "count": 22 },
    { "name": "group C", "count": 7 }
  ]
}
Basically, the document has an item count plus a breakdown of that count into smaller groups. So this record represents a collection of 100 items, of which 12 are from group A, 22 are from group B, 7 are from group C.
Now, I have an element range index on "count" and a bunch of such documents. I'd like to be able to sort by any of the options:
- sort by total count (descending or ascending)
- sort by group A count (descending or ascending)
- sort by group B count (descending or ascending)
- sort by group C count (descending or ascending)
I've tried
.orderBy(qb.sort("count", "descending"))
This seems to sort by the total count (the one in the document root), but I'm not sure if that's always true or whether I need to specify something else to guarantee it always picks that particular one.
For sorting by a specific group, I have no idea how to specify it.
Any advice?
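What I'm imagining (possibly wrong) is something along these lines, with a path range index defined per group - assuming predicates like [name = "group A"] are even allowed in path range index expressions:

// Sort on a path range index pinned to one group's count.
// Assumes an int path range index on /groups[name = "group A"]/count.
db.documents.query(
  qb.where()   // match everything; the real query would narrow this down
    .orderBy(qb.sort(qb.pathIndex('/groups[name = "group A"]/count'),
                     'descending'))
).result(function (documents) {
  documents.forEach(function (d) { console.log(d.uri); });
});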
-
For the past two weeks or so, I've been having issues with my ISP (I'm not in the US, so it's not Comcast or whatever).
Specifically, I often lose access to Google-related sites/services: the search engine itself, Gmail, Google Docs/Drive, YouTube, and Google Keep are the ones I use most often. I've restarted my router multiple times over the past couple of weeks. The same issue occurs on my desktop (wired), laptop, iPad, phone (when connected over Wi-Fi), etc.
I know it's related to my ISP because (a) I've read some other users experiencing similar issues on social media; (b) if I tether to my phone's data connection, the problem isn't encountered.
I don't have a proxy server configured. I used to be using Google DNS but I've also tried switching back to my ISP's default DNS with no luck.
What's the best way for me to figure out what the issue is? (Preferably in a way that I can bring it up to my ISP so that they know specifically what they need to fix - talking to their customer service is a tedious experience so I've decided to figure out as much as possible before going back to them)
Screenshot of the error from the browser: [screenshot]

Pings seem OK right now, although if I leave ping -t running for a while, there's an occasional "Request timed out": [ping output]