Merge development #18

Merged
35 commits merged on Jul 26, 2017

Commits
d26e6be
Fix SetCursorTimeout. See https://jira.mongodb.org/browse/SERVER-24899
BenLubar Jul 6, 2016
8183c81
add test case for no-timeout cursors
BenLubar Jul 14, 2016
20e84f3
run 'go fmt' using go 1.8
jameinel Jun 6, 2017
2eb5d1c
Add the test cases that show O(N^2) performance
jameinel Jun 6, 2017
9428095
Cache conversion from token to TXN ObjectId.
jameinel Jun 6, 2017
2491579
Include preloading of a bunch of transactions.
jameinel Jun 6, 2017
924d95b
Batch the preload into chunks.
jameinel Jun 6, 2017
a3e83d6
try to reuse the info.Queue conversion has a negative performance effect
jameinel Jun 6, 2017
b5ff827
Revert "try to reuse the info.Queue conversion has a negative perform…
jameinel Jun 6, 2017
9f347aa
Merge branch 'txn-id-caching' of https://github.com/jameinel/mgo into…
domodwyer Jun 15, 2017
2498227
Merge pull request #10 from globalsign/jameinel-txn-id-caching
domodwyer Jun 15, 2017
3fb76e6
fix running test on mongo 3.2
fmpwizard Jul 2, 2017
532c5ea
Added Hint and MaxTimeMS support to Count()
fmpwizard Jul 2, 2017
f84c737
Both features only work starting on 2.6
fmpwizard Jul 3, 2017
652a534
See if cleaning up mongo instances fixes the build
fmpwizard Jul 3, 2017
f9d8459
Set an upper limit of how large we will let txn-queues grow.
jameinel Jul 4, 2017
5a7588b
fix json time zone
reenjii Jul 4, 2017
9437503
Merge branch 'master' into development
domodwyer Jul 4, 2017
f89b2fc
Add Runner.SetOptions to control maximum queue length.
jameinel Jul 5, 2017
c1dc6dc
Merge branch 'reenjii/fix-json-timezone' into bugfix/reenjii-fix-json…
domodwyer Jul 5, 2017
1563394
Credit @Reenjii in the README.
domodwyer Jul 5, 2017
0cfadd5
Merge remote-tracking branch 'benlubar/no-timeout' into bugfix/benlub…
domodwyer Jul 5, 2017
4b45f77
Credit @BenLubar in README.
domodwyer Jul 5, 2017
d6025cb
Merge remote-tracking branch 'jameinel/max-txn-queue-length' into bug…
domodwyer Jul 5, 2017
71bfa1c
Add link to improvement by @jameinel
domodwyer Jul 5, 2017
a3bee14
Merge pull request #15 from globalsign/bugfix/benlubar-cursor-timeouts
domodwyer Jul 5, 2017
37e06bc
Merge branch 'development' into bugfix/reenjii-fix-json-timezone
domodwyer Jul 5, 2017
a724dca
Merge pull request #13 from globalsign/bugfix/reenjii-fix-json-timezone
domodwyer Jul 5, 2017
d47fb18
Merge branch 'development' into bugfix/jameinel-max-txn-queue-length
domodwyer Jul 5, 2017
73a9463
Merge pull request #16 from globalsign/bugfix/jameinel-max-txn-queue-…
domodwyer Jul 5, 2017
d3b6a6e
Merge pull request #11 from jameinel/txn-preload
domodwyer Jul 5, 2017
8771df2
Merge branch 'count_hint' of github.com:fmpwizard/mgo into feature/fm…
domodwyer Jul 26, 2017
e7068d7
Credit @fmpwizard in the README.
domodwyer Jul 26, 2017
f470795
Merge pull request #17 from globalsign/feature/fmpwizard-count-maxtim…
domodwyer Jul 26, 2017
863d0d8
Merge branch 'development' into merge-development
domodwyer Jul 26, 2017
1 change: 1 addition & 0 deletions .travis.yml
@@ -46,5 +46,6 @@ script:
- (cd bson && go test -check.v)
- go test -check.v -fast
- (cd txn && go test -check.v)
- make stopdb

# vim:sw=4:ts=4:et
12 changes: 9 additions & 3 deletions README.md
@@ -15,17 +15,23 @@ Further PR's (with tests) are welcome, but please maintain backwards compatibili
* Support majority read concerns ([details](https://github.com/globalsign/mgo/pull/2))
* Improved connection handling ([details](https://github.com/globalsign/mgo/pull/5))
* Hides SASL warnings ([details](https://github.com/globalsign/mgo/pull/7))
* Improved multi-document transaction performance ([details](https://github.com/globalsign/mgo/pull/10), [more](https://github.com/globalsign/mgo/pull/11))
* Integration tests run against newest MongoDB 3.2 releases ([details](https://github.com/globalsign/mgo/pull/4))
* Support for partial indexes ([details](https://github.com/domodwyer/mgo/commit/5efe8eccb028238d93c222828cae4806aeae9f51))
* Fixes timezone handling ([details](https://github.com/go-mgo/mgo/pull/464))
* Integration tests run against newest MongoDB 3.2 releases ([details](https://github.com/globalsign/mgo/pull/4))
* Improved multi-document transaction performance ([details](https://github.com/globalsign/mgo/pull/10), [more](https://github.com/globalsign/mgo/pull/11), [more](https://github.com/globalsign/mgo/pull/16))
* Fixes cursor timeouts ([details](https://jira.mongodb.org/browse/SERVER-24899))
* Support index hints and timeouts for count queries ([details](https://github.com/globalsign/mgo/pull/17))

---

### Thanks to
* @BenLubar
* @carter2000
* @cezarsa
* @eaglerayp
* @drichelson
* @eaglerayp
* @fmpwizard
* @jameinel
* @Reenjii
* @smoya
* @wgallagher
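
As an aside on the new "Support index hints and timeouts for count queries" entry above, the sketch below shows how the API added in this PR might be exercised; it is not part of the diff, and the server address, database, collection, and index name are placeholders. Both options are forwarded to the server's count command by the session.go changes further down.

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

func main() {
	// Placeholder address, database, collection, and index name.
	session, err := mgo.Dial("localhost:40001")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	coll := session.DB("mydb").C("mycoll")

	// With this PR the hint and the time limit are included in the count
	// command sent to the server (see the countCmd change in session.go
	// below); previously Count() did not pass them along.
	n, err := coll.Find(bson.M{"n": bson.M{"$gt": 1}}).
		Hint("n").                   // assumes an index on field "n"
		SetMaxTime(2 * time.Second). // fail rather than run unbounded
		Count()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("matching documents:", n)
}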
7 changes: 5 additions & 2 deletions bson/json.go
@@ -5,6 +5,7 @@ import (
"encoding/base64"
"fmt"
"strconv"
"strings"
"time"

"github.com/globalsign/mgo/internal/json"
@@ -156,7 +157,7 @@ func jencBinaryType(v interface{}) ([]byte, error) {
return fbytes(`{"$binary":"%s","$type":"0x%x"}`, out, in.Kind), nil
}

const jdateFormat = "2006-01-02T15:04:05.999Z"
const jdateFormat = "2006-01-02T15:04:05.999Z07:00"

func jdecDate(data []byte) (interface{}, error) {
var v struct {
@@ -170,13 +171,15 @@ func jdecDate(data []byte) (interface{}, error) {
v.S = v.Func.S
}
if v.S != "" {
var errs []string
for _, format := range []string{jdateFormat, "2006-01-02"} {
t, err := time.Parse(format, v.S)
if err == nil {
return t, nil
}
errs = append(errs, err.Error())
}
return nil, fmt.Errorf("cannot parse date: %q", v.S)
return nil, fmt.Errorf("cannot parse date: %q [%s]", v.S, strings.Join(errs, ", "))
}

var vn struct {
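
A brief note on the jdateFormat change above: Go's reference layout "Z07:00" matches either a literal "Z" or a numeric UTC offset, so the widened constant still parses the old UTC form while also accepting zoned dates. A self-contained sketch, not part of the diff:

package main

import (
	"fmt"
	"time"
)

// Mirrors the constant introduced above; "Z07:00" accepts "Z" or "+01:00".
const jdateFormat = "2006-01-02T15:04:05.999Z07:00"

func main() {
	for _, s := range []string{
		"2016-05-15T01:02:03.004Z",      // UTC, parsed as before
		"2016-05-15T01:02:03.004+01:00", // zoned, newly accepted
	} {
		t, err := time.Parse(jdateFormat, s)
		fmt.Println(t, err)
	}
}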
11 changes: 11 additions & 0 deletions bson/json_test.go
@@ -34,12 +34,18 @@ var jsonTests = []jsonTest{
{
a: time.Date(2016, 5, 15, 1, 2, 3, 4000000, time.UTC),
b: `{"$date":"2016-05-15T01:02:03.004Z"}`,
}, {
a: time.Date(2016, 5, 15, 1, 2, 3, 4000000, time.FixedZone("CET", 60*60)),
b: `{"$date":"2016-05-15T01:02:03.004+01:00"}`,
}, {
b: `{"$date": {"$numberLong": "1002"}}`,
c: time.Date(1970, 1, 1, 0, 0, 1, 2e6, time.UTC),
}, {
b: `ISODate("2016-05-15T01:02:03.004Z")`,
c: time.Date(2016, 5, 15, 1, 2, 3, 4000000, time.UTC),
}, {
b: `ISODate("2016-05-15T01:02:03.004-07:00")`,
c: time.Date(2016, 5, 15, 1, 2, 3, 4000000, time.FixedZone("PDT", -7*60*60)),
}, {
b: `new Date(1000)`,
c: time.Date(1970, 1, 1, 0, 0, 1, 0, time.UTC),
@@ -180,6 +186,11 @@ func (s *S) TestJSON(c *C) {
value = zerov.Elem().Interface()
}
c.Logf("Loaded: %#v", value)
if ctime, ok := item.c.(time.Time); ok {
// time.Time must be compared with time.Time.Equal and not reflect.DeepEquals
c.Assert(ctime.Equal(value.(time.Time)), Equals, true)
continue
}
c.Assert(value, DeepEquals, item.c)
}
}
47 changes: 28 additions & 19 deletions session.go
@@ -3281,20 +3281,23 @@ func prepareFindOp(socket *mongoSocket, op *queryOp, limit int32) bool {
}

find := findCmd{
Collection: op.collection[nameDot+1:],
Filter: op.query,
Projection: op.selector,
Sort: op.options.OrderBy,
Skip: op.skip,
Limit: limit,
MaxTimeMS: op.options.MaxTimeMS,
MaxScan: op.options.MaxScan,
Hint: op.options.Hint,
Comment: op.options.Comment,
Snapshot: op.options.Snapshot,
OplogReplay: op.flags&flagLogReplay != 0,
Collation: op.options.Collation,
ReadConcern: readLevel{level: op.readConcern},
Collection: op.collection[nameDot+1:],
Filter: op.query,
Projection: op.selector,
Sort: op.options.OrderBy,
Skip: op.skip,
Limit: limit,
MaxTimeMS: op.options.MaxTimeMS,
MaxScan: op.options.MaxScan,
Hint: op.options.Hint,
Comment: op.options.Comment,
Snapshot: op.options.Snapshot,
Collation: op.options.Collation,
Tailable: op.flags&flagTailable != 0,
AwaitData: op.flags&flagAwaitData != 0,
OplogReplay: op.flags&flagLogReplay != 0,
NoCursorTimeout: op.flags&flagNoCursorTimeout != 0,
ReadConcern: readLevel{level: op.readConcern},
}

if op.limit < 0 {
@@ -4083,10 +4086,12 @@ func (iter *Iter) getMoreCmd() *queryOp {
}

type countCmd struct {
Count string
Query interface{}
Limit int32 ",omitempty"
Skip int32 ",omitempty"
Count string
Query interface{}
Limit int32 ",omitempty"
Skip int32 ",omitempty"
Hint bson.D `bson:"hint,omitempty"`
MaxTimeMS int `bson:"maxTimeMS,omitempty"`
}

// Count returns the total number of documents in the result set.
@@ -4108,8 +4113,12 @@ func (q *Query) Count() (n int, err error) {
if query == nil {
query = bson.D{}
}
// not checking the error because if type assertion fails, we
// simply want a Zero bson.D
hint, _ := q.op.options.Hint.(bson.D)
result := struct{ N int }{}
err = session.DB(dbname).Run(countCmd{cname, query, limit, op.skip}, &result)
err = session.DB(dbname).Run(countCmd{cname, query, limit, op.skip, hint, op.options.MaxTimeMS}, &result)

return result.N, err
}

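
For context on the findCmd fields added above (Tailable, AwaitData, NoCursorTimeout): on MongoDB 3.2+ queries are issued through the find command, so cursor flags must be copied into it explicitly rather than relying on the legacy OP_QUERY flag bits alone. A usage sketch of the path that sets NoCursorTimeout; the address and database/collection names are placeholders, not from the diff:

package main

import (
	"log"

	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

func main() {
	// Placeholder address; any reachable mongod works for the sketch.
	session, err := mgo.Dial("localhost:40001")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// A zero duration asks the server never to expire the cursor. With the
	// findCmd change above, that request should also reach MongoDB 3.2+
	// servers via the find command's noCursorTimeout option, not only via
	// the OP_QUERY flag.
	session.SetCursorTimeout(0)
	session.SetBatch(1) // small batches force repeated getMore round trips

	iter := session.DB("test").C("test").Find(bson.M{}).Iter()
	var doc bson.M
	for iter.Next(&doc) {
		// ... process doc ...
	}
	if err := iter.Err(); err != nil {
		log.Fatal(err)
	}
}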
95 changes: 94 additions & 1 deletion session_test.go
@@ -1275,6 +1275,49 @@ func (s *S) TestCountSkipLimit(c *C) {
c.Assert(n, Equals, 4)
}

func (s *S) TestCountMaxTimeMS(c *C) {
if !s.versionAtLeast(2, 6) {
c.Skip("SetMaxTime only supported in 2.6+")
}

session, err := mgo.Dial("localhost:40001")
c.Assert(err, IsNil)
defer session.Close()

coll := session.DB("mydb").C("mycoll")

ns := make([]int, 100000)
for _, n := range ns {
err := coll.Insert(M{"n": n})
c.Assert(err, IsNil)
}
_, err = coll.Find(M{"n": M{"$gt": 1}}).SetMaxTime(1 * time.Millisecond).Count()
e := err.(*mgo.QueryError)
// We hope this query took longer than 1 ms, which triggers an error code 50
c.Assert(e.Code, Equals, 50)

}

func (s *S) TestCountHint(c *C) {
if !s.versionAtLeast(2, 6) {
c.Skip("Not implemented until mongo 2.5.5 https://jira.mongodb.org/browse/SERVER-2677")
}

session, err := mgo.Dial("localhost:40001")
c.Assert(err, IsNil)
defer session.Close()

coll := session.DB("mydb").C("mycoll")
err = coll.Insert(M{"n": 1})
c.Assert(err, IsNil)

_, err = coll.Find(M{"n": M{"$gt": 1}}).Hint("does_not_exists").Count()
e := err.(*mgo.QueryError)
// If Hint wasn't doing anything, Count would ignore the non-existent index hint
// and return the normal count. Instead we get error code 2: bad hint.
c.Assert(e.Code, Equals, 2)
}

func (s *S) TestQueryExplain(c *C) {
session, err := mgo.Dial("localhost:40001")
c.Assert(err, IsNil)
@@ -1673,7 +1716,7 @@ func (s *S) TestResumeIter(c *C) {
c.Assert(len(batch), Equals, 0)
}

var cursorTimeout = flag.Bool("cursor-timeout", false, "Enable cursor timeout test")
var cursorTimeout = flag.Bool("cursor-timeout", false, "Enable cursor timeout tests")

func (s *S) TestFindIterCursorTimeout(c *C) {
if !*cursorTimeout {
@@ -1717,6 +1760,56 @@
c.Assert(iter.Err(), Equals, mgo.ErrCursor)
}

func (s *S) TestFindIterCursorNoTimeout(c *C) {
if !*cursorTimeout {
c.Skip("-cursor-timeout")
}
session, err := mgo.Dial("localhost:40001")
c.Assert(err, IsNil)
defer session.Close()

session.SetCursorTimeout(0)

type Doc struct {
Id int "_id"
}

coll := session.DB("test").C("test")
coll.Remove(nil)
for i := 0; i < 100; i++ {
err = coll.Insert(Doc{i})
c.Assert(err, IsNil)
}

session.SetBatch(1)
iter := coll.Find(nil).Iter()
var doc Doc
if !iter.Next(&doc) {
c.Fatalf("iterator failed to return any documents")
}

for i := 10; i > 0; i-- {
c.Logf("Sleeping... %d minutes to go...", i)
time.Sleep(1*time.Minute + 2*time.Second)
}

// Drain any existing documents that were fetched.
if !iter.Next(&doc) {
c.Fatalf("iterator failed to return previously cached document")
}
for i := 1; i < 100; i++ {
if !iter.Next(&doc) {
c.Errorf("iterator failed on iteration %d", i)
break
}
}
if iter.Next(&doc) {
c.Error("iterator returned more than 100 documents")
}

c.Assert(iter.Err(), IsNil)
}

func (s *S) TestTooManyItemsLimitBug(c *C) {
if *fast {
c.Skip("-fast")
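
A closing note on the two cursor tests above (TestFindIterCursorTimeout and TestFindIterCursorNoTimeout): each sleeps for a little over ten minutes to let the server-side cursor timeout window elapse, so both are skipped unless the opt-in flag defined earlier in session_test.go is supplied, for example by running the suite as "go test -check.v -cursor-timeout" (the -check.v flag matches the .travis.yml script at the top of this diff).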